2025-11-01 12:06:51.549868 | Job console starting
2025-11-01 12:06:51.563184 | Updating git repos
2025-11-01 12:06:51.684661 | Cloning repos into workspace
2025-11-01 12:06:51.880250 | Restoring repo states
2025-11-01 12:06:51.897701 | Merging changes
2025-11-01 12:06:51.897721 | Checking out repos
2025-11-01 12:06:52.320461 | Preparing playbooks
2025-11-01 12:06:52.901642 | Running Ansible setup
2025-11-01 12:06:56.921010 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-11-01 12:06:57.638409 |
2025-11-01 12:06:57.638631 | PLAY [Base pre]
2025-11-01 12:06:57.655363 |
2025-11-01 12:06:57.655489 | TASK [Setup log path fact]
2025-11-01 12:06:57.675064 | orchestrator | ok
2025-11-01 12:06:57.692011 |
2025-11-01 12:06:57.692145 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-01 12:06:57.720896 | orchestrator | ok
2025-11-01 12:06:57.732505 |
2025-11-01 12:06:57.732612 | TASK [emit-job-header : Print job information]
2025-11-01 12:06:57.771820 | # Job Information
2025-11-01 12:06:57.771998 | Ansible Version: 2.16.14
2025-11-01 12:06:57.772034 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-11-01 12:06:57.772068 | Pipeline: post
2025-11-01 12:06:57.772091 | Executor: 521e9411259a
2025-11-01 12:06:57.772112 | Triggered by: https://github.com/osism/testbed/commit/2000cfe2c86255cc0d5aed45a5153689fe4ce916
2025-11-01 12:06:57.772134 | Event ID: 3bb7d8e2-b71b-11f0-8655-63a270f132cf
2025-11-01 12:06:57.778715 |
2025-11-01 12:06:57.778826 | LOOP [emit-job-header : Print node information]
2025-11-01 12:06:57.900923 | orchestrator | ok:
2025-11-01 12:06:57.901201 | orchestrator | # Node Information
2025-11-01 12:06:57.901256 | orchestrator | Inventory Hostname: orchestrator
2025-11-01 12:06:57.901298 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-11-01 12:06:57.901335 | orchestrator | Username: zuul-testbed05
2025-11-01 12:06:57.901369 | orchestrator | Distro: Debian 12.12
2025-11-01 12:06:57.901408 | orchestrator | Provider: static-testbed
2025-11-01 12:06:57.901443 | orchestrator | Region:
2025-11-01 12:06:57.901478 | orchestrator | Label: testbed-orchestrator
2025-11-01 12:06:57.901509 | orchestrator | Product Name: OpenStack Nova
2025-11-01 12:06:57.901541 | orchestrator | Interface IP: 81.163.193.140
2025-11-01 12:06:57.918402 |
2025-11-01 12:06:57.918521 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-11-01 12:06:58.387390 | orchestrator -> localhost | changed
2025-11-01 12:06:58.404227 |
2025-11-01 12:06:58.404397 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-11-01 12:06:59.391645 | orchestrator -> localhost | changed
2025-11-01 12:06:59.414193 |
2025-11-01 12:06:59.414330 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-11-01 12:06:59.661386 | orchestrator -> localhost | ok
2025-11-01 12:06:59.676001 |
2025-11-01 12:06:59.676171 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-11-01 12:06:59.714760 | orchestrator | ok
2025-11-01 12:06:59.735469 | orchestrator | included: /var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-11-01 12:06:59.743470 |
2025-11-01 12:06:59.743571 | TASK [add-build-sshkey : Create Temp SSH key]
2025-11-01 12:07:00.726553 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-11-01 12:07:00.727128 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/0479b91f6fbb44c9a2b8080c3e05ad70_id_rsa
2025-11-01 12:07:00.727238 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/0479b91f6fbb44c9a2b8080c3e05ad70_id_rsa.pub
2025-11-01 12:07:00.727314 | orchestrator -> localhost | The key fingerprint is:
2025-11-01 12:07:00.727391 | orchestrator -> localhost | SHA256:L2fT6dZv1MCfa5jKq0zfKK/AozkgBDVFkxhGqHknbMo zuul-build-sshkey
2025-11-01 12:07:00.727456 | orchestrator -> localhost | The key's randomart image is:
2025-11-01 12:07:00.727537 | orchestrator -> localhost | +---[RSA 3072]----+
2025-11-01 12:07:00.727599 | orchestrator -> localhost | | +*=+. |
2025-11-01 12:07:00.727661 | orchestrator -> localhost | |o..... |
2025-11-01 12:07:00.727718 | orchestrator -> localhost | |oo . |
2025-11-01 12:07:00.727773 | orchestrator -> localhost | |o.= . o |
2025-11-01 12:07:00.727830 | orchestrator -> localhost | |o+ o S oo|
2025-11-01 12:07:00.727891 | orchestrator -> localhost | |.E . . . . . .+|
2025-11-01 12:07:00.727947 | orchestrator -> localhost | | . . +..= o.o..|
2025-11-01 12:07:00.728031 | orchestrator -> localhost | | .o ==oo+o.o.|
2025-11-01 12:07:00.728092 | orchestrator -> localhost | | o. +=O+..o.|
2025-11-01 12:07:00.728151 | orchestrator -> localhost | +----[SHA256]-----+
2025-11-01 12:07:00.728289 | orchestrator -> localhost | ok: Runtime: 0:00:00.492229
2025-11-01 12:07:00.743208 |
2025-11-01 12:07:00.743356 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-11-01 12:07:00.765881 | orchestrator | ok
2025-11-01 12:07:00.776573 | orchestrator | included: /var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-11-01 12:07:00.785437 |
2025-11-01 12:07:00.785531 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-11-01 12:07:00.808716 | orchestrator | skipping: Conditional result was False
2025-11-01 12:07:00.817300 |
2025-11-01 12:07:00.817401 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-11-01 12:07:01.374107 | orchestrator | changed
2025-11-01 12:07:01.380499 |
2025-11-01 12:07:01.380615 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-11-01 12:07:01.650157 | orchestrator | ok
2025-11-01 12:07:01.659930 |
2025-11-01 12:07:01.660123 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-11-01 12:07:02.529038 | orchestrator | ok
2025-11-01 12:07:02.543416 |
2025-11-01 12:07:02.543652 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-11-01 12:07:02.946918 | orchestrator | ok
2025-11-01 12:07:02.954453 |
2025-11-01 12:07:02.954573 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-11-01 12:07:02.978763 | orchestrator | skipping: Conditional result was False
2025-11-01 12:07:02.986092 |
2025-11-01 12:07:02.986202 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-11-01 12:07:03.405723 | orchestrator -> localhost | changed
2025-11-01 12:07:03.430234 |
2025-11-01 12:07:03.430381 | TASK [add-build-sshkey : Add back temp key]
2025-11-01 12:07:03.758705 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/0479b91f6fbb44c9a2b8080c3e05ad70_id_rsa (zuul-build-sshkey)
2025-11-01 12:07:03.759339 | orchestrator -> localhost | ok: Runtime: 0:00:00.019854
2025-11-01 12:07:03.774426 |
2025-11-01 12:07:03.774573 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-11-01 12:07:04.193629 | orchestrator | ok
2025-11-01 12:07:04.201202 |
2025-11-01 12:07:04.201329 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-11-01 12:07:04.224992 | orchestrator | skipping: Conditional result was False
2025-11-01 12:07:04.272865 |
2025-11-01 12:07:04.273007 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-11-01 12:07:04.657929 | orchestrator | ok
2025-11-01 12:07:04.672604 |
2025-11-01 12:07:04.672748 | TASK [validate-host : Define zuul_info_dir fact]
2025-11-01 12:07:04.714545 | orchestrator | ok
2025-11-01 12:07:04.724923 |
2025-11-01 12:07:04.725067 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-11-01 12:07:05.024212 | orchestrator -> localhost | ok
2025-11-01 12:07:05.040863 |
2025-11-01 12:07:05.041082 | TASK [validate-host : Collect information about the host]
2025-11-01 12:07:06.166409 | orchestrator | ok
2025-11-01 12:07:06.187927 |
2025-11-01 12:07:06.188105 | TASK [validate-host : Sanitize hostname]
2025-11-01 12:07:06.242123 | orchestrator | ok
2025-11-01 12:07:06.250422 |
2025-11-01 12:07:06.250563 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-11-01 12:07:06.792375 | orchestrator -> localhost | changed
2025-11-01 12:07:06.804766 |
2025-11-01 12:07:06.804921 | TASK [validate-host : Collect information about zuul worker]
2025-11-01 12:07:07.229271 | orchestrator | ok
2025-11-01 12:07:07.241196 |
2025-11-01 12:07:07.241528 | TASK [validate-host : Write out all zuul information for each host]
2025-11-01 12:07:07.764077 | orchestrator -> localhost | changed
2025-11-01 12:07:07.781386 |
2025-11-01 12:07:07.781526 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-11-01 12:07:08.060379 | orchestrator | ok
2025-11-01 12:07:08.069654 |
2025-11-01 12:07:08.069792 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-11-01 12:07:49.895278 | orchestrator | changed:
2025-11-01 12:07:49.895461 | orchestrator | .d..t...... src/
2025-11-01 12:07:49.895496 | orchestrator | .d..t...... src/github.com/
2025-11-01 12:07:49.895522 | orchestrator | .d..t...... src/github.com/osism/
2025-11-01 12:07:49.895545 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-11-01 12:07:49.895566 | orchestrator | RedHat.yml
2025-11-01 12:07:49.909122 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-11-01 12:07:49.909140 | orchestrator | RedHat.yml
2025-11-01 12:07:49.909192 | orchestrator | = 2.2.0"...
2025-11-01 12:08:01.500256 | orchestrator | 12:08:01.500 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-11-01 12:08:01.517620 | orchestrator | 12:08:01.517 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-11-01 12:08:01.664982 | orchestrator | 12:08:01.664 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-11-01 12:08:02.050924 | orchestrator | 12:08:02.050 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-11-01 12:08:02.313682 | orchestrator | 12:08:02.313 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-11-01 12:08:02.900282 | orchestrator | 12:08:02.900 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-11-01 12:08:03.378639 | orchestrator | 12:08:03.378 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-11-01 12:08:04.257424 | orchestrator | 12:08:04.257 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-11-01 12:08:04.257485 | orchestrator | 12:08:04.257 STDOUT terraform: Providers are signed by their developers.
2025-11-01 12:08:04.257532 | orchestrator | 12:08:04.257 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-11-01 12:08:04.257601 | orchestrator | 12:08:04.257 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-11-01 12:08:04.257711 | orchestrator | 12:08:04.257 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-11-01 12:08:04.257890 | orchestrator | 12:08:04.257 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-11-01 12:08:04.257967 | orchestrator | 12:08:04.257 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-11-01 12:08:04.258040 | orchestrator | 12:08:04.257 STDOUT terraform: you run "tofu init" in the future.
2025-11-01 12:08:04.258127 | orchestrator | 12:08:04.258 STDOUT terraform: OpenTofu has been successfully initialized!
2025-11-01 12:08:04.258235 | orchestrator | 12:08:04.258 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-11-01 12:08:04.258364 | orchestrator | 12:08:04.258 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-11-01 12:08:04.258454 | orchestrator | 12:08:04.258 STDOUT terraform: should now work.
2025-11-01 12:08:04.258560 | orchestrator | 12:08:04.258 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-11-01 12:08:04.258675 | orchestrator | 12:08:04.258 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-11-01 12:08:04.258766 | orchestrator | 12:08:04.258 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-11-01 12:08:04.576722 | orchestrator | 12:08:04.576 STDOUT terraform: Created and switched to workspace "ci"!
2025-11-01 12:08:04.576802 | orchestrator | 12:08:04.576 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-11-01 12:08:04.577013 | orchestrator | 12:08:04.576 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-11-01 12:08:04.577030 | orchestrator | 12:08:04.576 STDOUT terraform: for this configuration.
2025-11-01 12:08:04.807567 | orchestrator | 12:08:04.806 STDOUT terraform: ci.auto.tfvars
2025-11-01 12:08:04.812778 | orchestrator | 12:08:04.812 STDOUT terraform: default_custom.tf
2025-11-01 12:08:05.802088 | orchestrator | 12:08:05.801 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-11-01 12:08:06.303657 | orchestrator | 12:08:06.303 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-11-01 12:08:06.530161 | orchestrator | 12:08:06.527 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-11-01 12:08:06.530241 | orchestrator | 12:08:06.527 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-11-01 12:08:06.530247 | orchestrator | 12:08:06.527 STDOUT terraform:  + create
2025-11-01 12:08:06.530253 | orchestrator | 12:08:06.527 STDOUT terraform:  <= read (data resources)
2025-11-01 12:08:06.530259 | orchestrator | 12:08:06.527 STDOUT terraform: OpenTofu will perform the following actions:
2025-11-01 12:08:06.530263 | orchestrator | 12:08:06.527 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-11-01 12:08:06.530276 | orchestrator | 12:08:06.527 STDOUT terraform:  # (config refers to values not yet known)
2025-11-01 12:08:06.530281 | orchestrator | 12:08:06.527 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-11-01 12:08:06.530285 | orchestrator | 12:08:06.527 STDOUT terraform:  + checksum = (known after apply)
2025-11-01 12:08:06.530289 | orchestrator | 12:08:06.527 STDOUT terraform:  + created_at = (known after apply)
2025-11-01 12:08:06.530293 | orchestrator | 12:08:06.527 STDOUT terraform:  + file = (known after apply)
2025-11-01 12:08:06.530297 | orchestrator | 12:08:06.527 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.530301 | orchestrator | 12:08:06.528 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.530305 | orchestrator | 12:08:06.528 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-11-01 12:08:06.530309 | orchestrator | 12:08:06.528 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-11-01 12:08:06.530312 | orchestrator | 12:08:06.528 STDOUT terraform:  + most_recent = true
2025-11-01 12:08:06.530316 | orchestrator | 12:08:06.528 STDOUT terraform:  + name = (known after apply)
2025-11-01 12:08:06.530320 | orchestrator | 12:08:06.528 STDOUT terraform:  + protected = (known after apply)
2025-11-01 12:08:06.530324 | orchestrator | 12:08:06.528 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.530328 | orchestrator | 12:08:06.528 STDOUT terraform:  + schema = (known after apply)
2025-11-01 12:08:06.530331 | orchestrator | 12:08:06.528 STDOUT terraform:  + size_bytes = (known after apply)
2025-11-01 12:08:06.530335 | orchestrator | 12:08:06.528 STDOUT terraform:  + tags = (known after apply)
2025-11-01 12:08:06.530339 | orchestrator | 12:08:06.528 STDOUT terraform:  + updated_at = (known after apply)
2025-11-01 12:08:06.530383 | orchestrator | 12:08:06.528 STDOUT terraform:  }
2025-11-01 12:08:06.530389 | orchestrator | 12:08:06.528 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-11-01 12:08:06.530393 | orchestrator | 12:08:06.528 STDOUT terraform:  # (config refers to values not yet known)
2025-11-01 12:08:06.530397 | orchestrator | 12:08:06.528 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-11-01 12:08:06.530401 | orchestrator | 12:08:06.528 STDOUT terraform:  + checksum = (known after apply)
2025-11-01 12:08:06.530405 | orchestrator | 12:08:06.528 STDOUT terraform:  + created_at = (known after apply)
2025-11-01 12:08:06.530408 | orchestrator | 12:08:06.528 STDOUT terraform:  + file = (known after apply)
2025-11-01 12:08:06.530415 | orchestrator | 12:08:06.528 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.530419 | orchestrator | 12:08:06.528 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.530423 | orchestrator | 12:08:06.528 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-11-01 12:08:06.530426 | orchestrator | 12:08:06.528 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-11-01 12:08:06.530430 | orchestrator | 12:08:06.528 STDOUT terraform:  + most_recent = true
2025-11-01 12:08:06.530434 | orchestrator | 12:08:06.528 STDOUT terraform:  + name = (known after apply)
2025-11-01 12:08:06.530438 | orchestrator | 12:08:06.528 STDOUT terraform:  + protected = (known after apply)
2025-11-01 12:08:06.530441 | orchestrator | 12:08:06.528 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.530458 | orchestrator | 12:08:06.528 STDOUT terraform:  + schema = (known after apply)
2025-11-01 12:08:06.530462 | orchestrator | 12:08:06.528 STDOUT terraform:  + size_bytes = (known after apply)
2025-11-01 12:08:06.530466 | orchestrator | 12:08:06.528 STDOUT terraform:  + tags = (known after apply)
2025-11-01 12:08:06.530470 | orchestrator | 12:08:06.528 STDOUT terraform:  + updated_at = (known after apply)
2025-11-01 12:08:06.530474 | orchestrator | 12:08:06.528 STDOUT terraform:  }
2025-11-01 12:08:06.530477 | orchestrator | 12:08:06.528 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-11-01 12:08:06.530481 | orchestrator | 12:08:06.528 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-11-01 12:08:06.530485 | orchestrator | 12:08:06.528 STDOUT terraform:  + content = (known after apply)
2025-11-01 12:08:06.530491 | orchestrator | 12:08:06.528 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-01 12:08:06.530495 | orchestrator | 12:08:06.528 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-01 12:08:06.530499 | orchestrator | 12:08:06.528 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-01 12:08:06.530503 | orchestrator | 12:08:06.528 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-01 12:08:06.530506 | orchestrator | 12:08:06.528 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-01 12:08:06.530510 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-01 12:08:06.530514 | orchestrator | 12:08:06.529 STDOUT terraform:  + directory_permission = "0777"
2025-11-01 12:08:06.530521 | orchestrator | 12:08:06.529 STDOUT terraform:  + file_permission = "0644"
2025-11-01 12:08:06.530525 | orchestrator | 12:08:06.529 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-11-01 12:08:06.530529 | orchestrator | 12:08:06.529 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.530532 | orchestrator | 12:08:06.529 STDOUT terraform:  }
2025-11-01 12:08:06.530537 | orchestrator | 12:08:06.529 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-11-01 12:08:06.530541 | orchestrator | 12:08:06.529 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-11-01 12:08:06.530544 | orchestrator | 12:08:06.529 STDOUT terraform:  + content = (known after apply)
2025-11-01 12:08:06.530548 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-01 12:08:06.530552 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-01 12:08:06.530556 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-01 12:08:06.530559 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-01 12:08:06.530563 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-01 12:08:06.530567 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-01 12:08:06.530570 | orchestrator | 12:08:06.529 STDOUT terraform:  + directory_permission = "0777"
2025-11-01 12:08:06.530574 | orchestrator | 12:08:06.529 STDOUT terraform:  + file_permission = "0644"
2025-11-01 12:08:06.530578 | orchestrator | 12:08:06.529 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-11-01 12:08:06.530582 | orchestrator | 12:08:06.529 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.530586 | orchestrator | 12:08:06.529 STDOUT terraform:  }
2025-11-01 12:08:06.530589 | orchestrator | 12:08:06.529 STDOUT terraform:  # local_file.inventory will be created
2025-11-01 12:08:06.530593 | orchestrator | 12:08:06.529 STDOUT terraform:  + resource "local_file" "inventory" {
2025-11-01 12:08:06.530597 | orchestrator | 12:08:06.529 STDOUT terraform:  + content = (known after apply)
2025-11-01 12:08:06.530601 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-01 12:08:06.530604 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-01 12:08:06.530610 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-01 12:08:06.530614 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-01 12:08:06.530618 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-01 12:08:06.530622 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-01 12:08:06.530625 | orchestrator | 12:08:06.529 STDOUT terraform:  + directory_permission = "0777"
2025-11-01 12:08:06.530629 | orchestrator | 12:08:06.529 STDOUT terraform:  + file_permission = "0644"
2025-11-01 12:08:06.530633 | orchestrator | 12:08:06.529 STDOUT terraform:  + filename = "inventory.ci"
2025-11-01 12:08:06.530641 | orchestrator | 12:08:06.529 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.530645 | orchestrator | 12:08:06.529 STDOUT terraform:  }
2025-11-01 12:08:06.530649 | orchestrator | 12:08:06.529 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-11-01 12:08:06.530653 | orchestrator | 12:08:06.529 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-11-01 12:08:06.530657 | orchestrator | 12:08:06.529 STDOUT terraform:  + content = (sensitive value)
2025-11-01 12:08:06.530660 | orchestrator | 12:08:06.529 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-11-01 12:08:06.534143 | orchestrator | 12:08:06.530 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-11-01 12:08:06.534222 | orchestrator | 12:08:06.530 STDOUT terraform:  + content_md5 = (known after apply)
2025-11-01 12:08:06.534230 | orchestrator | 12:08:06.530 STDOUT terraform:  + content_sha1 = (known after apply)
2025-11-01 12:08:06.534236 | orchestrator | 12:08:06.530 STDOUT terraform:  + content_sha256 = (known after apply)
2025-11-01 12:08:06.534245 | orchestrator | 12:08:06.530 STDOUT terraform:  + content_sha512 = (known after apply)
2025-11-01 12:08:06.534251 | orchestrator | 12:08:06.530 STDOUT terraform:  + directory_permission = "0700"
2025-11-01 12:08:06.534258 | orchestrator | 12:08:06.530 STDOUT terraform:  + file_permission = "0600"
2025-11-01 12:08:06.534263 | orchestrator | 12:08:06.530 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-11-01 12:08:06.534269 | orchestrator | 12:08:06.530 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534274 | orchestrator | 12:08:06.530 STDOUT terraform:  }
2025-11-01 12:08:06.534281 | orchestrator | 12:08:06.530 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-11-01 12:08:06.534287 | orchestrator | 12:08:06.531 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-11-01 12:08:06.534292 | orchestrator | 12:08:06.531 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534298 | orchestrator | 12:08:06.531 STDOUT terraform:  }
2025-11-01 12:08:06.534303 | orchestrator | 12:08:06.531 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-11-01 12:08:06.534310 | orchestrator | 12:08:06.531 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-11-01 12:08:06.534315 | orchestrator | 12:08:06.531 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534321 | orchestrator | 12:08:06.531 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534327 | orchestrator | 12:08:06.531 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534332 | orchestrator | 12:08:06.531 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534338 | orchestrator | 12:08:06.531 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534359 | orchestrator | 12:08:06.531 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-11-01 12:08:06.534365 | orchestrator | 12:08:06.531 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534371 | orchestrator | 12:08:06.531 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534390 | orchestrator | 12:08:06.531 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534396 | orchestrator | 12:08:06.531 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534402 | orchestrator | 12:08:06.531 STDOUT terraform:  }
2025-11-01 12:08:06.534415 | orchestrator | 12:08:06.531 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-11-01 12:08:06.534421 | orchestrator | 12:08:06.531 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534427 | orchestrator | 12:08:06.531 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534433 | orchestrator | 12:08:06.531 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534439 | orchestrator | 12:08:06.531 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534445 | orchestrator | 12:08:06.531 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534451 | orchestrator | 12:08:06.531 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534456 | orchestrator | 12:08:06.531 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-11-01 12:08:06.534462 | orchestrator | 12:08:06.531 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534468 | orchestrator | 12:08:06.531 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534485 | orchestrator | 12:08:06.531 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534491 | orchestrator | 12:08:06.531 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534497 | orchestrator | 12:08:06.531 STDOUT terraform:  }
2025-11-01 12:08:06.534503 | orchestrator | 12:08:06.531 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-11-01 12:08:06.534509 | orchestrator | 12:08:06.531 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534515 | orchestrator | 12:08:06.531 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534520 | orchestrator | 12:08:06.531 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534526 | orchestrator | 12:08:06.531 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534532 | orchestrator | 12:08:06.531 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534538 | orchestrator | 12:08:06.532 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534543 | orchestrator | 12:08:06.532 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-11-01 12:08:06.534549 | orchestrator | 12:08:06.532 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534555 | orchestrator | 12:08:06.532 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534561 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534566 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534572 | orchestrator | 12:08:06.532 STDOUT terraform:  }
2025-11-01 12:08:06.534578 | orchestrator | 12:08:06.532 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-11-01 12:08:06.534588 | orchestrator | 12:08:06.532 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534594 | orchestrator | 12:08:06.532 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534600 | orchestrator | 12:08:06.532 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534606 | orchestrator | 12:08:06.532 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534611 | orchestrator | 12:08:06.532 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534617 | orchestrator | 12:08:06.532 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534623 | orchestrator | 12:08:06.532 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-11-01 12:08:06.534628 | orchestrator | 12:08:06.532 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534634 | orchestrator | 12:08:06.532 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534640 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534645 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534651 | orchestrator | 12:08:06.532 STDOUT terraform:  }
2025-11-01 12:08:06.534657 | orchestrator | 12:08:06.532 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-11-01 12:08:06.534663 | orchestrator | 12:08:06.532 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534672 | orchestrator | 12:08:06.532 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534677 | orchestrator | 12:08:06.532 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534683 | orchestrator | 12:08:06.532 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534689 | orchestrator | 12:08:06.532 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534695 | orchestrator | 12:08:06.532 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534704 | orchestrator | 12:08:06.532 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-11-01 12:08:06.534710 | orchestrator | 12:08:06.532 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534716 | orchestrator | 12:08:06.532 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534722 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534728 | orchestrator | 12:08:06.532 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534733 | orchestrator | 12:08:06.532 STDOUT terraform:  }
2025-11-01 12:08:06.534739 | orchestrator | 12:08:06.532 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-11-01 12:08:06.534745 | orchestrator | 12:08:06.532 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534751 | orchestrator | 12:08:06.532 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534760 | orchestrator | 12:08:06.532 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534766 | orchestrator | 12:08:06.533 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534771 | orchestrator | 12:08:06.533 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534777 | orchestrator | 12:08:06.533 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534783 | orchestrator | 12:08:06.533 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-11-01 12:08:06.534788 | orchestrator | 12:08:06.533 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534794 | orchestrator | 12:08:06.533 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534800 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534806 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534812 | orchestrator | 12:08:06.533 STDOUT terraform:  }
2025-11-01 12:08:06.534817 | orchestrator | 12:08:06.533 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-11-01 12:08:06.534826 | orchestrator | 12:08:06.533 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-11-01 12:08:06.534832 | orchestrator | 12:08:06.533 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534838 | orchestrator | 12:08:06.533 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534844 | orchestrator | 12:08:06.533 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534849 | orchestrator | 12:08:06.533 STDOUT terraform:  + image_id = (known after apply)
2025-11-01 12:08:06.534855 | orchestrator | 12:08:06.533 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534861 | orchestrator | 12:08:06.533 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-11-01 12:08:06.534866 | orchestrator | 12:08:06.533 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.534872 | orchestrator | 12:08:06.533 STDOUT terraform:  + size = 80
2025-11-01 12:08:06.534878 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_retype_policy = "never"
2025-11-01 12:08:06.534884 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_type = "ssd"
2025-11-01 12:08:06.534890 | orchestrator | 12:08:06.533 STDOUT terraform:  }
2025-11-01 12:08:06.534895 | orchestrator | 12:08:06.533 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-11-01 12:08:06.534901 | orchestrator | 12:08:06.533 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-11-01 12:08:06.534907 | orchestrator | 12:08:06.533 STDOUT terraform:  + attachment = (known after apply)
2025-11-01 12:08:06.534913 | orchestrator | 12:08:06.533 STDOUT terraform:  + availability_zone = "nova"
2025-11-01 12:08:06.534918 | orchestrator | 12:08:06.533 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.534924 | orchestrator | 12:08:06.533 STDOUT terraform:  + metadata = (known after apply)
2025-11-01 12:08:06.534933 | orchestrator | 12:08:06.533 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-11-01 12:08:06.534943 | orchestrator | 12:08:06.533 STDOUT terraform:  + region = (known
after apply) 2025-11-01 12:08:06.534949 | orchestrator | 12:08:06.533 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.534954 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.534960 | orchestrator | 12:08:06.533 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.534966 | orchestrator | 12:08:06.533 STDOUT terraform:  } 2025-11-01 12:08:06.534972 | orchestrator | 12:08:06.533 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-11-01 12:08:06.534978 | orchestrator | 12:08:06.533 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.534983 | orchestrator | 12:08:06.533 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.534989 | orchestrator | 12:08:06.533 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.534998 | orchestrator | 12:08:06.534 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.535007 | orchestrator | 12:08:06.534 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.535134 | orchestrator | 12:08:06.534 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-11-01 12:08:06.535161 | orchestrator | 12:08:06.535 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.535171 | orchestrator | 12:08:06.535 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.535224 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.535232 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.535239 | orchestrator | 12:08:06.535 STDOUT terraform:  } 2025-11-01 12:08:06.535271 | orchestrator | 12:08:06.535 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-11-01 12:08:06.535317 | orchestrator | 12:08:06.535 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.535376 | 
orchestrator | 12:08:06.535 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.535387 | orchestrator | 12:08:06.535 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.535413 | orchestrator | 12:08:06.535 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.535448 | orchestrator | 12:08:06.535 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.535499 | orchestrator | 12:08:06.535 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-11-01 12:08:06.535510 | orchestrator | 12:08:06.535 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.535536 | orchestrator | 12:08:06.535 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.535588 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.535595 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.535601 | orchestrator | 12:08:06.535 STDOUT terraform:  } 2025-11-01 12:08:06.535658 | orchestrator | 12:08:06.535 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-11-01 12:08:06.535674 | orchestrator | 12:08:06.535 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.535710 | orchestrator | 12:08:06.535 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.535719 | orchestrator | 12:08:06.535 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.535790 | orchestrator | 12:08:06.535 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.535799 | orchestrator | 12:08:06.535 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.535828 | orchestrator | 12:08:06.535 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-11-01 12:08:06.535878 | orchestrator | 12:08:06.535 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.535886 | orchestrator | 12:08:06.535 STDOUT terraform:  + size 
= 20 2025-11-01 12:08:06.535894 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.535922 | orchestrator | 12:08:06.535 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.535931 | orchestrator | 12:08:06.535 STDOUT terraform:  } 2025-11-01 12:08:06.535980 | orchestrator | 12:08:06.535 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-11-01 12:08:06.536019 | orchestrator | 12:08:06.535 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.536055 | orchestrator | 12:08:06.536 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.536079 | orchestrator | 12:08:06.536 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.536123 | orchestrator | 12:08:06.536 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.536148 | orchestrator | 12:08:06.536 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.536185 | orchestrator | 12:08:06.536 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-11-01 12:08:06.536233 | orchestrator | 12:08:06.536 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.536241 | orchestrator | 12:08:06.536 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.536250 | orchestrator | 12:08:06.536 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.536278 | orchestrator | 12:08:06.536 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.536288 | orchestrator | 12:08:06.536 STDOUT terraform:  } 2025-11-01 12:08:06.536332 | orchestrator | 12:08:06.536 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-11-01 12:08:06.536402 | orchestrator | 12:08:06.536 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.536414 | orchestrator | 12:08:06.536 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 
12:08:06.536438 | orchestrator | 12:08:06.536 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.536489 | orchestrator | 12:08:06.536 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.536604 | orchestrator | 12:08:06.536 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.536620 | orchestrator | 12:08:06.536 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-11-01 12:08:06.536815 | orchestrator | 12:08:06.536 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.536824 | orchestrator | 12:08:06.536 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.536833 | orchestrator | 12:08:06.536 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.536861 | orchestrator | 12:08:06.536 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.536883 | orchestrator | 12:08:06.536 STDOUT terraform:  } 2025-11-01 12:08:06.537072 | orchestrator | 12:08:06.536 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-11-01 12:08:06.537160 | orchestrator | 12:08:06.537 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.537202 | orchestrator | 12:08:06.537 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.537353 | orchestrator | 12:08:06.537 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.537514 | orchestrator | 12:08:06.537 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.537623 | orchestrator | 12:08:06.537 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.537744 | orchestrator | 12:08:06.537 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-11-01 12:08:06.537772 | orchestrator | 12:08:06.537 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.537897 | orchestrator | 12:08:06.537 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.537905 | orchestrator | 12:08:06.537 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-11-01 12:08:06.537914 | orchestrator | 12:08:06.537 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.537922 | orchestrator | 12:08:06.537 STDOUT terraform:  } 2025-11-01 12:08:06.538063 | orchestrator | 12:08:06.537 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-11-01 12:08:06.538141 | orchestrator | 12:08:06.538 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.538175 | orchestrator | 12:08:06.538 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.538245 | orchestrator | 12:08:06.538 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.538406 | orchestrator | 12:08:06.538 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.538417 | orchestrator | 12:08:06.538 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.538425 | orchestrator | 12:08:06.538 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-11-01 12:08:06.538516 | orchestrator | 12:08:06.538 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.538525 | orchestrator | 12:08:06.538 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.538622 | orchestrator | 12:08:06.538 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.538631 | orchestrator | 12:08:06.538 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.538639 | orchestrator | 12:08:06.538 STDOUT terraform:  } 2025-11-01 12:08:06.538708 | orchestrator | 12:08:06.538 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-11-01 12:08:06.538816 | orchestrator | 12:08:06.538 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-11-01 12:08:06.538968 | orchestrator | 12:08:06.538 STDOUT terraform:  + attachment = (known after apply) 2025-11-01 12:08:06.539043 | orchestrator | 12:08:06.538 STDOUT terraform:  + availability_zone = 
"nova" 2025-11-01 12:08:06.539099 | orchestrator | 12:08:06.539 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.539172 | orchestrator | 12:08:06.539 STDOUT terraform:  + metadata = (known after apply) 2025-11-01 12:08:06.539295 | orchestrator | 12:08:06.539 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-11-01 12:08:06.539449 | orchestrator | 12:08:06.539 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.539459 | orchestrator | 12:08:06.539 STDOUT terraform:  + size = 20 2025-11-01 12:08:06.539488 | orchestrator | 12:08:06.539 STDOUT terraform:  + volume_retype_policy = "never" 2025-11-01 12:08:06.539570 | orchestrator | 12:08:06.539 STDOUT terraform:  + volume_type = "ssd" 2025-11-01 12:08:06.539578 | orchestrator | 12:08:06.539 STDOUT terraform:  } 2025-11-01 12:08:06.539616 | orchestrator | 12:08:06.539 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-11-01 12:08:06.539817 | orchestrator | 12:08:06.539 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-11-01 12:08:06.539960 | orchestrator | 12:08:06.539 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 12:08:06.540067 | orchestrator | 12:08:06.539 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 12:08:06.540170 | orchestrator | 12:08:06.539 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 12:08:06.540384 | orchestrator | 12:08:06.540 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.540598 | orchestrator | 12:08:06.540 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.540621 | orchestrator | 12:08:06.540 STDOUT terraform:  + config_drive = true 2025-11-01 12:08:06.540659 | orchestrator | 12:08:06.540 STDOUT terraform:  + created = (known after apply) 2025-11-01 12:08:06.540694 | orchestrator | 12:08:06.540 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 12:08:06.540724 | orchestrator | 
12:08:06.540 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-11-01 12:08:06.540750 | orchestrator | 12:08:06.540 STDOUT terraform:  + force_delete = false 2025-11-01 12:08:06.540784 | orchestrator | 12:08:06.540 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 12:08:06.540818 | orchestrator | 12:08:06.540 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.540853 | orchestrator | 12:08:06.540 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 12:08:06.540888 | orchestrator | 12:08:06.540 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 12:08:06.540916 | orchestrator | 12:08:06.540 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 12:08:06.540945 | orchestrator | 12:08:06.540 STDOUT terraform:  + name = "testbed-manager" 2025-11-01 12:08:06.540967 | orchestrator | 12:08:06.540 STDOUT terraform:  + power_state = "active" 2025-11-01 12:08:06.541004 | orchestrator | 12:08:06.540 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.541038 | orchestrator | 12:08:06.540 STDOUT terraform:  + security_groups = (known after apply) 2025-11-01 12:08:06.541060 | orchestrator | 12:08:06.541 STDOUT terraform:  + stop_before_destroy = false 2025-11-01 12:08:06.541101 | orchestrator | 12:08:06.541 STDOUT terraform:  + updated = (known after apply) 2025-11-01 12:08:06.541131 | orchestrator | 12:08:06.541 STDOUT terraform:  + user_data = (sensitive value) 2025-11-01 12:08:06.541139 | orchestrator | 12:08:06.541 STDOUT terraform:  + block_device { 2025-11-01 12:08:06.541170 | orchestrator | 12:08:06.541 STDOUT terraform:  + boot_index = 0 2025-11-01 12:08:06.541196 | orchestrator | 12:08:06.541 STDOUT terraform:  + delete_on_termination = false 2025-11-01 12:08:06.541225 | orchestrator | 12:08:06.541 STDOUT terraform:  + destination_type = "volume" 2025-11-01 12:08:06.541252 | orchestrator | 12:08:06.541 STDOUT terraform:  + multiattach = false 2025-11-01 12:08:06.541280 | orchestrator | 
12:08:06.541 STDOUT terraform:  + source_type = "volume" 2025-11-01 12:08:06.541317 | orchestrator | 12:08:06.541 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.541324 | orchestrator | 12:08:06.541 STDOUT terraform:  } 2025-11-01 12:08:06.541330 | orchestrator | 12:08:06.541 STDOUT terraform:  + network { 2025-11-01 12:08:06.541370 | orchestrator | 12:08:06.541 STDOUT terraform:  + access_network = false 2025-11-01 12:08:06.541402 | orchestrator | 12:08:06.541 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 12:08:06.541433 | orchestrator | 12:08:06.541 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 12:08:06.541464 | orchestrator | 12:08:06.541 STDOUT terraform:  + mac = (known after apply) 2025-11-01 12:08:06.541495 | orchestrator | 12:08:06.541 STDOUT terraform:  + name = (known after apply) 2025-11-01 12:08:06.541525 | orchestrator | 12:08:06.541 STDOUT terraform:  + port = (known after apply) 2025-11-01 12:08:06.541556 | orchestrator | 12:08:06.541 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.541563 | orchestrator | 12:08:06.541 STDOUT terraform:  } 2025-11-01 12:08:06.541570 | orchestrator | 12:08:06.541 STDOUT terraform:  } 2025-11-01 12:08:06.541618 | orchestrator | 12:08:06.541 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-11-01 12:08:06.541658 | orchestrator | 12:08:06.541 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 12:08:06.541693 | orchestrator | 12:08:06.541 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 12:08:06.541727 | orchestrator | 12:08:06.541 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 12:08:06.541761 | orchestrator | 12:08:06.541 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 12:08:06.541797 | orchestrator | 12:08:06.541 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.541817 | orchestrator | 
12:08:06.541 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.541836 | orchestrator | 12:08:06.541 STDOUT terraform:  + config_drive = true 2025-11-01 12:08:06.541869 | orchestrator | 12:08:06.541 STDOUT terraform:  + created = (known after apply) 2025-11-01 12:08:06.541903 | orchestrator | 12:08:06.541 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 12:08:06.541933 | orchestrator | 12:08:06.541 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 12:08:06.541952 | orchestrator | 12:08:06.541 STDOUT terraform:  + force_delete = false 2025-11-01 12:08:06.541986 | orchestrator | 12:08:06.541 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 12:08:06.542021 | orchestrator | 12:08:06.541 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.542074 | orchestrator | 12:08:06.542 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 12:08:06.542107 | orchestrator | 12:08:06.542 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 12:08:06.542131 | orchestrator | 12:08:06.542 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 12:08:06.542161 | orchestrator | 12:08:06.542 STDOUT terraform:  + name = "testbed-node-0" 2025-11-01 12:08:06.542187 | orchestrator | 12:08:06.542 STDOUT terraform:  + power_state = "active" 2025-11-01 12:08:06.542222 | orchestrator | 12:08:06.542 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.542261 | orchestrator | 12:08:06.542 STDOUT terraform:  + security_groups = (known after apply) 2025-11-01 12:08:06.542282 | orchestrator | 12:08:06.542 STDOUT terraform:  + stop_before_destroy = false 2025-11-01 12:08:06.542316 | orchestrator | 12:08:06.542 STDOUT terraform:  + updated = (known after apply) 2025-11-01 12:08:06.542418 | orchestrator | 12:08:06.542 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-01 12:08:06.542426 | orchestrator | 12:08:06.542 STDOUT terraform:  + block_device { 
2025-11-01 12:08:06.542433 | orchestrator | 12:08:06.542 STDOUT terraform:  + boot_index = 0 2025-11-01 12:08:06.542454 | orchestrator | 12:08:06.542 STDOUT terraform:  + delete_on_termination = false 2025-11-01 12:08:06.542485 | orchestrator | 12:08:06.542 STDOUT terraform:  + destination_type = "volume" 2025-11-01 12:08:06.542523 | orchestrator | 12:08:06.542 STDOUT terraform:  + multiattach = false 2025-11-01 12:08:06.542553 | orchestrator | 12:08:06.542 STDOUT terraform:  + source_type = "volume" 2025-11-01 12:08:06.542593 | orchestrator | 12:08:06.542 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.542600 | orchestrator | 12:08:06.542 STDOUT terraform:  } 2025-11-01 12:08:06.542619 | orchestrator | 12:08:06.542 STDOUT terraform:  + network { 2025-11-01 12:08:06.542643 | orchestrator | 12:08:06.542 STDOUT terraform:  + access_network = false 2025-11-01 12:08:06.542674 | orchestrator | 12:08:06.542 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 12:08:06.542704 | orchestrator | 12:08:06.542 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 12:08:06.542736 | orchestrator | 12:08:06.542 STDOUT terraform:  + mac = (known after apply) 2025-11-01 12:08:06.542767 | orchestrator | 12:08:06.542 STDOUT terraform:  + name = (known after apply) 2025-11-01 12:08:06.542798 | orchestrator | 12:08:06.542 STDOUT terraform:  + port = (known after apply) 2025-11-01 12:08:06.542828 | orchestrator | 12:08:06.542 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.542834 | orchestrator | 12:08:06.542 STDOUT terraform:  } 2025-11-01 12:08:06.542852 | orchestrator | 12:08:06.542 STDOUT terraform:  } 2025-11-01 12:08:06.542896 | orchestrator | 12:08:06.542 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-11-01 12:08:06.542936 | orchestrator | 12:08:06.542 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 12:08:06.542971 | orchestrator | 
12:08:06.542 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 12:08:06.543005 | orchestrator | 12:08:06.542 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 12:08:06.543039 | orchestrator | 12:08:06.543 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 12:08:06.543074 | orchestrator | 12:08:06.543 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.543099 | orchestrator | 12:08:06.543 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.543118 | orchestrator | 12:08:06.543 STDOUT terraform:  + config_drive = true 2025-11-01 12:08:06.543152 | orchestrator | 12:08:06.543 STDOUT terraform:  + created = (known after apply) 2025-11-01 12:08:06.543186 | orchestrator | 12:08:06.543 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 12:08:06.543214 | orchestrator | 12:08:06.543 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 12:08:06.543238 | orchestrator | 12:08:06.543 STDOUT terraform:  + force_delete = false 2025-11-01 12:08:06.543270 | orchestrator | 12:08:06.543 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 12:08:06.543306 | orchestrator | 12:08:06.543 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.543338 | orchestrator | 12:08:06.543 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 12:08:06.543386 | orchestrator | 12:08:06.543 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 12:08:06.543410 | orchestrator | 12:08:06.543 STDOUT terraform:  + key_pair = "testbed" 2025-11-01 12:08:06.543441 | orchestrator | 12:08:06.543 STDOUT terraform:  + name = "testbed-node-1" 2025-11-01 12:08:06.543465 | orchestrator | 12:08:06.543 STDOUT terraform:  + power_state = "active" 2025-11-01 12:08:06.543499 | orchestrator | 12:08:06.543 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.543533 | orchestrator | 12:08:06.543 STDOUT terraform:  + security_groups = (known after apply) 
2025-11-01 12:08:06.543556 | orchestrator | 12:08:06.543 STDOUT terraform:  + stop_before_destroy = false 2025-11-01 12:08:06.543590 | orchestrator | 12:08:06.543 STDOUT terraform:  + updated = (known after apply) 2025-11-01 12:08:06.543639 | orchestrator | 12:08:06.543 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-11-01 12:08:06.543657 | orchestrator | 12:08:06.543 STDOUT terraform:  + block_device { 2025-11-01 12:08:06.543679 | orchestrator | 12:08:06.543 STDOUT terraform:  + boot_index = 0 2025-11-01 12:08:06.543705 | orchestrator | 12:08:06.543 STDOUT terraform:  + delete_on_termination = false 2025-11-01 12:08:06.543736 | orchestrator | 12:08:06.543 STDOUT terraform:  + destination_type = "volume" 2025-11-01 12:08:06.543764 | orchestrator | 12:08:06.543 STDOUT terraform:  + multiattach = false 2025-11-01 12:08:06.543794 | orchestrator | 12:08:06.543 STDOUT terraform:  + source_type = "volume" 2025-11-01 12:08:06.543833 | orchestrator | 12:08:06.543 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.543840 | orchestrator | 12:08:06.543 STDOUT terraform:  } 2025-11-01 12:08:06.543857 | orchestrator | 12:08:06.543 STDOUT terraform:  + network { 2025-11-01 12:08:06.543878 | orchestrator | 12:08:06.543 STDOUT terraform:  + access_network = false 2025-11-01 12:08:06.543908 | orchestrator | 12:08:06.543 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-11-01 12:08:06.543939 | orchestrator | 12:08:06.543 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-11-01 12:08:06.543970 | orchestrator | 12:08:06.543 STDOUT terraform:  + mac = (known after apply) 2025-11-01 12:08:06.544000 | orchestrator | 12:08:06.543 STDOUT terraform:  + name = (known after apply) 2025-11-01 12:08:06.544032 | orchestrator | 12:08:06.543 STDOUT terraform:  + port = (known after apply) 2025-11-01 12:08:06.544059 | orchestrator | 12:08:06.544 STDOUT terraform:  + uuid = (known after apply) 2025-11-01 12:08:06.544066 | 
orchestrator | 12:08:06.544 STDOUT terraform:  } 2025-11-01 12:08:06.544082 | orchestrator | 12:08:06.544 STDOUT terraform:  } 2025-11-01 12:08:06.544125 | orchestrator | 12:08:06.544 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-11-01 12:08:06.544165 | orchestrator | 12:08:06.544 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-11-01 12:08:06.544199 | orchestrator | 12:08:06.544 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-11-01 12:08:06.544233 | orchestrator | 12:08:06.544 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-11-01 12:08:06.544269 | orchestrator | 12:08:06.544 STDOUT terraform:  + all_metadata = (known after apply) 2025-11-01 12:08:06.544303 | orchestrator | 12:08:06.544 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.544326 | orchestrator | 12:08:06.544 STDOUT terraform:  + availability_zone = "nova" 2025-11-01 12:08:06.544356 | orchestrator | 12:08:06.544 STDOUT terraform:  + config_drive = true 2025-11-01 12:08:06.544392 | orchestrator | 12:08:06.544 STDOUT terraform:  + created = (known after apply) 2025-11-01 12:08:06.544427 | orchestrator | 12:08:06.544 STDOUT terraform:  + flavor_id = (known after apply) 2025-11-01 12:08:06.544454 | orchestrator | 12:08:06.544 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-11-01 12:08:06.544479 | orchestrator | 12:08:06.544 STDOUT terraform:  + force_delete = false 2025-11-01 12:08:06.544512 | orchestrator | 12:08:06.544 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-11-01 12:08:06.544546 | orchestrator | 12:08:06.544 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.544581 | orchestrator | 12:08:06.544 STDOUT terraform:  + image_id = (known after apply) 2025-11-01 12:08:06.544615 | orchestrator | 12:08:06.544 STDOUT terraform:  + image_name = (known after apply) 2025-11-01 12:08:06.544639 | orchestrator | 12:08:06.544 STDOUT terraform:  + 
2025-11-01 12:08:06 | orchestrator | 12:08:06 STDOUT terraform:
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
terraform:  } 2025-11-01 12:08:06.587971 | orchestrator | 12:08:06.587 STDOUT terraform:  } 2025-11-01 12:08:06.588026 | orchestrator | 12:08:06.587 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-11-01 12:08:06.588080 | orchestrator | 12:08:06.588 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-01 12:08:06.588134 | orchestrator | 12:08:06.588 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-01 12:08:06.588177 | orchestrator | 12:08:06.588 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-01 12:08:06.588227 | orchestrator | 12:08:06.588 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-01 12:08:06.588271 | orchestrator | 12:08:06.588 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.588313 | orchestrator | 12:08:06.588 STDOUT terraform:  + device_id = (known after apply) 2025-11-01 12:08:06.588372 | orchestrator | 12:08:06.588 STDOUT terraform:  + device_owner = (known after apply) 2025-11-01 12:08:06.588417 | orchestrator | 12:08:06.588 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-01 12:08:06.588459 | orchestrator | 12:08:06.588 STDOUT terraform:  + dns_name = (known after apply) 2025-11-01 12:08:06.588503 | orchestrator | 12:08:06.588 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.588552 | orchestrator | 12:08:06.588 STDOUT terraform:  + mac_address = (known after apply) 2025-11-01 12:08:06.588605 | orchestrator | 12:08:06.588 STDOUT terraform:  + network_id = (known after apply) 2025-11-01 12:08:06.588666 | orchestrator | 12:08:06.588 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-01 12:08:06.588710 | orchestrator | 12:08:06.588 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-01 12:08:06.588731 | orchestrator | 12:08:06.588 STDOUT terraform:  + 2025-11-01 12:08:06.588849 | orchestrator | 12:08:06.588 STDOUT 
terraform:  region = (known after apply) 2025-11-01 12:08:06.588922 | orchestrator | 12:08:06.588 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-01 12:08:06.588969 | orchestrator | 12:08:06.588 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.588996 | orchestrator | 12:08:06.588 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.589031 | orchestrator | 12:08:06.589 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-01 12:08:06.589052 | orchestrator | 12:08:06.589 STDOUT terraform:  } 2025-11-01 12:08:06.589078 | orchestrator | 12:08:06.589 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.589123 | orchestrator | 12:08:06.589 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-01 12:08:06.589147 | orchestrator | 12:08:06.589 STDOUT terraform:  } 2025-11-01 12:08:06.589174 | orchestrator | 12:08:06.589 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.589214 | orchestrator | 12:08:06.589 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-01 12:08:06.589248 | orchestrator | 12:08:06.589 STDOUT terraform:  } 2025-11-01 12:08:06.589279 | orchestrator | 12:08:06.589 STDOUT terraform:  + binding (known after apply) 2025-11-01 12:08:06.589299 | orchestrator | 12:08:06.589 STDOUT terraform:  + fixed_ip { 2025-11-01 12:08:06.589330 | orchestrator | 12:08:06.589 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-11-01 12:08:06.589377 | orchestrator | 12:08:06.589 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-01 12:08:06.589398 | orchestrator | 12:08:06.589 STDOUT terraform:  } 2025-11-01 12:08:06.589429 | orchestrator | 12:08:06.589 STDOUT terraform:  } 2025-11-01 12:08:06.589483 | orchestrator | 12:08:06.589 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-11-01 12:08:06.589551 | orchestrator | 12:08:06.589 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 
2025-11-01 12:08:06.589598 | orchestrator | 12:08:06.589 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-01 12:08:06.589641 | orchestrator | 12:08:06.589 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-01 12:08:06.589684 | orchestrator | 12:08:06.589 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-01 12:08:06.589733 | orchestrator | 12:08:06.589 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.589776 | orchestrator | 12:08:06.589 STDOUT terraform:  + device_id = (known after apply) 2025-11-01 12:08:06.589817 | orchestrator | 12:08:06.589 STDOUT terraform:  + device_owner = (known after apply) 2025-11-01 12:08:06.589857 | orchestrator | 12:08:06.589 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-01 12:08:06.589907 | orchestrator | 12:08:06.589 STDOUT terraform:  + dns_name = (known after apply) 2025-11-01 12:08:06.589974 | orchestrator | 12:08:06.589 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.590061 | orchestrator | 12:08:06.589 STDOUT terraform:  + mac_address = (known after apply) 2025-11-01 12:08:06.590106 | orchestrator | 12:08:06.590 STDOUT terraform:  + network_id = (known after apply) 2025-11-01 12:08:06.590155 | orchestrator | 12:08:06.590 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-01 12:08:06.590201 | orchestrator | 12:08:06.590 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-01 12:08:06.590244 | orchestrator | 12:08:06.590 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.590293 | orchestrator | 12:08:06.590 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-01 12:08:06.590337 | orchestrator | 12:08:06.590 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.590382 | orchestrator | 12:08:06.590 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.590421 | orchestrator | 12:08:06.590 STDOUT terraform:  + ip_address = 
"192.168.16.254/32" 2025-11-01 12:08:06.590453 | orchestrator | 12:08:06.590 STDOUT terraform:  } 2025-11-01 12:08:06.590491 | orchestrator | 12:08:06.590 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.590533 | orchestrator | 12:08:06.590 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-01 12:08:06.590555 | orchestrator | 12:08:06.590 STDOUT terraform:  } 2025-11-01 12:08:06.590597 | orchestrator | 12:08:06.590 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.590632 | orchestrator | 12:08:06.590 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-01 12:08:06.590653 | orchestrator | 12:08:06.590 STDOUT terraform:  } 2025-11-01 12:08:06.590683 | orchestrator | 12:08:06.590 STDOUT terraform:  + binding (known after apply) 2025-11-01 12:08:06.590704 | orchestrator | 12:08:06.590 STDOUT terraform:  + fixed_ip { 2025-11-01 12:08:06.590734 | orchestrator | 12:08:06.590 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-11-01 12:08:06.590776 | orchestrator | 12:08:06.590 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-01 12:08:06.590809 | orchestrator | 12:08:06.590 STDOUT terraform:  } 2025-11-01 12:08:06.590831 | orchestrator | 12:08:06.590 STDOUT terraform:  } 2025-11-01 12:08:06.590885 | orchestrator | 12:08:06.590 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-11-01 12:08:06.590936 | orchestrator | 12:08:06.590 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-01 12:08:06.590986 | orchestrator | 12:08:06.590 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-01 12:08:06.591029 | orchestrator | 12:08:06.590 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-01 12:08:06.591069 | orchestrator | 12:08:06.591 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-01 12:08:06.591111 | orchestrator | 12:08:06.591 STDOUT terraform:  + all_tags = (known after apply) 
2025-11-01 12:08:06.591165 | orchestrator | 12:08:06.591 STDOUT terraform:  + device_id = (known after apply) 2025-11-01 12:08:06.591209 | orchestrator | 12:08:06.591 STDOUT terraform:  + device_owner = (known after apply) 2025-11-01 12:08:06.591267 | orchestrator | 12:08:06.591 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-01 12:08:06.591311 | orchestrator | 12:08:06.591 STDOUT terraform:  + dns_name = (known after apply) 2025-11-01 12:08:06.591379 | orchestrator | 12:08:06.591 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.591425 | orchestrator | 12:08:06.591 STDOUT terraform:  + mac_address = (known after apply) 2025-11-01 12:08:06.591471 | orchestrator | 12:08:06.591 STDOUT terraform:  + network_id = (known after apply) 2025-11-01 12:08:06.591514 | orchestrator | 12:08:06.591 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-01 12:08:06.591563 | orchestrator | 12:08:06.591 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-01 12:08:06.591606 | orchestrator | 12:08:06.591 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.591661 | orchestrator | 12:08:06.591 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-01 12:08:06.591712 | orchestrator | 12:08:06.591 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.591740 | orchestrator | 12:08:06.591 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.591790 | orchestrator | 12:08:06.591 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-01 12:08:06.591813 | orchestrator | 12:08:06.591 STDOUT terraform:  } 2025-11-01 12:08:06.591840 | orchestrator | 12:08:06.591 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.591874 | orchestrator | 12:08:06.591 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-01 12:08:06.591895 | orchestrator | 12:08:06.591 STDOUT terraform:  } 2025-11-01 12:08:06.591926 | orchestrator | 12:08:06.591 STDOUT terraform: 
 + allowed_address_pairs { 2025-11-01 12:08:06.591963 | orchestrator | 12:08:06.591 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-01 12:08:06.591983 | orchestrator | 12:08:06.591 STDOUT terraform:  } 2025-11-01 12:08:06.592013 | orchestrator | 12:08:06.591 STDOUT terraform:  + binding (known after apply) 2025-11-01 12:08:06.592034 | orchestrator | 12:08:06.592 STDOUT terraform:  + fixed_ip { 2025-11-01 12:08:06.592080 | orchestrator | 12:08:06.592 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-11-01 12:08:06.592117 | orchestrator | 12:08:06.592 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-01 12:08:06.592136 | orchestrator | 12:08:06.592 STDOUT terraform:  } 2025-11-01 12:08:06.592162 | orchestrator | 12:08:06.592 STDOUT terraform:  } 2025-11-01 12:08:06.592216 | orchestrator | 12:08:06.592 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-11-01 12:08:06.592266 | orchestrator | 12:08:06.592 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-11-01 12:08:06.592308 | orchestrator | 12:08:06.592 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-01 12:08:06.592369 | orchestrator | 12:08:06.592 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-11-01 12:08:06.592415 | orchestrator | 12:08:06.592 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-11-01 12:08:06.592463 | orchestrator | 12:08:06.592 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.592505 | orchestrator | 12:08:06.592 STDOUT terraform:  + device_id = (known after apply) 2025-11-01 12:08:06.592561 | orchestrator | 12:08:06.592 STDOUT terraform:  + device_owner = (known after apply) 2025-11-01 12:08:06.592604 | orchestrator | 12:08:06.592 STDOUT terraform:  + dns_assignment = (known after apply) 2025-11-01 12:08:06.592646 | orchestrator | 12:08:06.592 STDOUT terraform:  + dns_name = (known after apply) 2025-11-01 
12:08:06.592694 | orchestrator | 12:08:06.592 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.592756 | orchestrator | 12:08:06.592 STDOUT terraform:  + mac_address = (known after apply) 2025-11-01 12:08:06.592799 | orchestrator | 12:08:06.592 STDOUT terraform:  + network_id = (known after apply) 2025-11-01 12:08:06.592840 | orchestrator | 12:08:06.592 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-11-01 12:08:06.592886 | orchestrator | 12:08:06.592 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-11-01 12:08:06.592930 | orchestrator | 12:08:06.592 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.592971 | orchestrator | 12:08:06.592 STDOUT terraform:  + security_group_ids = (known after apply) 2025-11-01 12:08:06.593026 | orchestrator | 12:08:06.592 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.593056 | orchestrator | 12:08:06.593 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.593091 | orchestrator | 12:08:06.593 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-11-01 12:08:06.593111 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593138 | orchestrator | 12:08:06.593 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.593171 | orchestrator | 12:08:06.593 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-11-01 12:08:06.593199 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593227 | orchestrator | 12:08:06.593 STDOUT terraform:  + allowed_address_pairs { 2025-11-01 12:08:06.593261 | orchestrator | 12:08:06.593 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-11-01 12:08:06.593281 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593318 | orchestrator | 12:08:06.593 STDOUT terraform:  + binding (known after apply) 2025-11-01 12:08:06.593386 | orchestrator | 12:08:06.593 STDOUT terraform:  + fixed_ip { 2025-11-01 12:08:06.593423 | orchestrator | 
12:08:06.593 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-11-01 12:08:06.593459 | orchestrator | 12:08:06.593 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-01 12:08:06.593479 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593498 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593551 | orchestrator | 12:08:06.593 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-11-01 12:08:06.593611 | orchestrator | 12:08:06.593 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-11-01 12:08:06.593645 | orchestrator | 12:08:06.593 STDOUT terraform:  + force_destroy = false 2025-11-01 12:08:06.593680 | orchestrator | 12:08:06.593 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.593721 | orchestrator | 12:08:06.593 STDOUT terraform:  + port_id = (known after apply) 2025-11-01 12:08:06.593767 | orchestrator | 12:08:06.593 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.593815 | orchestrator | 12:08:06.593 STDOUT terraform:  + router_id = (known after apply) 2025-11-01 12:08:06.593856 | orchestrator | 12:08:06.593 STDOUT terraform:  + subnet_id = (known after apply) 2025-11-01 12:08:06.593879 | orchestrator | 12:08:06.593 STDOUT terraform:  } 2025-11-01 12:08:06.593920 | orchestrator | 12:08:06.593 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-11-01 12:08:06.593971 | orchestrator | 12:08:06.593 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-11-01 12:08:06.594029 | orchestrator | 12:08:06.593 STDOUT terraform:  + admin_state_up = (known after apply) 2025-11-01 12:08:06.594074 | orchestrator | 12:08:06.594 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.594110 | orchestrator | 12:08:06.594 STDOUT terraform:  + availability_zone_hints = [ 2025-11-01 12:08:06.594133 | orchestrator | 12:08:06.594 
STDOUT terraform:  + "nova", 2025-11-01 12:08:06.594154 | orchestrator | 12:08:06.594 STDOUT terraform:  ] 2025-11-01 12:08:06.594211 | orchestrator | 12:08:06.594 STDOUT terraform:  + distributed = (known after apply) 2025-11-01 12:08:06.594257 | orchestrator | 12:08:06.594 STDOUT terraform:  + enable_snat = (known after apply) 2025-11-01 12:08:06.594313 | orchestrator | 12:08:06.594 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-11-01 12:08:06.594370 | orchestrator | 12:08:06.594 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-11-01 12:08:06.594413 | orchestrator | 12:08:06.594 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.594509 | orchestrator | 12:08:06.594 STDOUT terraform:  + name = "testbed" 2025-11-01 12:08:06.594563 | orchestrator | 12:08:06.594 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.594608 | orchestrator | 12:08:06.594 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.594644 | orchestrator | 12:08:06.594 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-11-01 12:08:06.594664 | orchestrator | 12:08:06.594 STDOUT terraform:  } 2025-11-01 12:08:06.594746 | orchestrator | 12:08:06.594 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-11-01 12:08:06.594815 | orchestrator | 12:08:06.594 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-11-01 12:08:06.594854 | orchestrator | 12:08:06.594 STDOUT terraform:  + description = "ssh" 2025-11-01 12:08:06.594891 | orchestrator | 12:08:06.594 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.594923 | orchestrator | 12:08:06.594 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.594966 | orchestrator | 12:08:06.594 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.595007 | orchestrator | 12:08:06.594 STDOUT 
terraform:  + port_range_max = 22 2025-11-01 12:08:06.595048 | orchestrator | 12:08:06.595 STDOUT terraform:  + port_range_min = 22 2025-11-01 12:08:06.595091 | orchestrator | 12:08:06.595 STDOUT terraform:  + protocol = "tcp" 2025-11-01 12:08:06.595141 | orchestrator | 12:08:06.595 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.595184 | orchestrator | 12:08:06.595 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.595235 | orchestrator | 12:08:06.595 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.595272 | orchestrator | 12:08:06.595 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.595321 | orchestrator | 12:08:06.595 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.595427 | orchestrator | 12:08:06.595 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.595500 | orchestrator | 12:08:06.595 STDOUT terraform:  } 2025-11-01 12:08:06.595562 | orchestrator | 12:08:06.595 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-11-01 12:08:06.595629 | orchestrator | 12:08:06.595 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-11-01 12:08:06.595673 | orchestrator | 12:08:06.595 STDOUT terraform:  + description = "wireguard" 2025-11-01 12:08:06.595711 | orchestrator | 12:08:06.595 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.595743 | orchestrator | 12:08:06.595 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.595799 | orchestrator | 12:08:06.595 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.595831 | orchestrator | 12:08:06.595 STDOUT terraform:  + port_range_max = 51820 2025-11-01 12:08:06.595862 | orchestrator | 12:08:06.595 STDOUT terraform:  + port_range_min = 51820 2025-11-01 12:08:06.595901 | orchestrator | 12:08:06.595 STDOUT terraform:  + 
protocol = "udp" 2025-11-01 12:08:06.595955 | orchestrator | 12:08:06.595 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.595999 | orchestrator | 12:08:06.595 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.596041 | orchestrator | 12:08:06.596 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.596077 | orchestrator | 12:08:06.596 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.596119 | orchestrator | 12:08:06.596 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.596168 | orchestrator | 12:08:06.596 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.596189 | orchestrator | 12:08:06.596 STDOUT terraform:  } 2025-11-01 12:08:06.596256 | orchestrator | 12:08:06.596 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-11-01 12:08:06.596316 | orchestrator | 12:08:06.596 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-11-01 12:08:06.596371 | orchestrator | 12:08:06.596 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.596408 | orchestrator | 12:08:06.596 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.596463 | orchestrator | 12:08:06.596 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.596499 | orchestrator | 12:08:06.596 STDOUT terraform:  + protocol = "tcp" 2025-11-01 12:08:06.596542 | orchestrator | 12:08:06.596 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.596595 | orchestrator | 12:08:06.596 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.596640 | orchestrator | 12:08:06.596 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.596681 | orchestrator | 12:08:06.596 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-01 12:08:06.596739 | 
orchestrator | 12:08:06.596 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.596783 | orchestrator | 12:08:06.596 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.596803 | orchestrator | 12:08:06.596 STDOUT terraform:  } 2025-11-01 12:08:06.596868 | orchestrator | 12:08:06.596 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-11-01 12:08:06.596928 | orchestrator | 12:08:06.596 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-11-01 12:08:06.596963 | orchestrator | 12:08:06.596 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.597006 | orchestrator | 12:08:06.596 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.597057 | orchestrator | 12:08:06.597 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.597090 | orchestrator | 12:08:06.597 STDOUT terraform:  + protocol = "udp" 2025-11-01 12:08:06.597138 | orchestrator | 12:08:06.597 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.597181 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.597233 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.597276 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-11-01 12:08:06.597323 | orchestrator | 12:08:06.597 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.597391 | orchestrator | 12:08:06.597 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.597414 | orchestrator | 12:08:06.597 STDOUT terraform:  } 2025-11-01 12:08:06.597472 | orchestrator | 12:08:06.597 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-11-01 12:08:06.597537 | orchestrator | 12:08:06.597 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-11-01 12:08:06.597574 | orchestrator | 12:08:06.597 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.597613 | orchestrator | 12:08:06.597 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.597657 | orchestrator | 12:08:06.597 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.597705 | orchestrator | 12:08:06.597 STDOUT terraform:  + protocol = "icmp" 2025-11-01 12:08:06.597763 | orchestrator | 12:08:06.597 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.597807 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.597849 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.597884 | orchestrator | 12:08:06.597 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.597925 | orchestrator | 12:08:06.597 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.597974 | orchestrator | 12:08:06.597 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.597995 | orchestrator | 12:08:06.597 STDOUT terraform:  } 2025-11-01 12:08:06.598085 | orchestrator | 12:08:06.598 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-11-01 12:08:06.598145 | orchestrator | 12:08:06.598 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-11-01 12:08:06.598181 | orchestrator | 12:08:06.598 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.598227 | orchestrator | 12:08:06.598 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.598272 | orchestrator | 12:08:06.598 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.598303 | orchestrator | 12:08:06.598 STDOUT terraform:  + protocol = "tcp" 2025-11-01 12:08:06.598371 | 
orchestrator | 12:08:06.598 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.598422 | orchestrator | 12:08:06.598 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.598466 | orchestrator | 12:08:06.598 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.598503 | orchestrator | 12:08:06.598 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.598544 | orchestrator | 12:08:06.598 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.598586 | orchestrator | 12:08:06.598 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.598607 | orchestrator | 12:08:06.598 STDOUT terraform:  } 2025-11-01 12:08:06.598682 | orchestrator | 12:08:06.598 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-11-01 12:08:06.598740 | orchestrator | 12:08:06.598 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-11-01 12:08:06.598782 | orchestrator | 12:08:06.598 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.598815 | orchestrator | 12:08:06.598 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.598857 | orchestrator | 12:08:06.598 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.598888 | orchestrator | 12:08:06.598 STDOUT terraform:  + protocol = "udp" 2025-11-01 12:08:06.598931 | orchestrator | 12:08:06.598 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.598985 | orchestrator | 12:08:06.598 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.599042 | orchestrator | 12:08:06.599 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.599080 | orchestrator | 12:08:06.599 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.599123 | orchestrator | 12:08:06.599 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-11-01 12:08:06.599178 | orchestrator | 12:08:06.599 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.599207 | orchestrator | 12:08:06.599 STDOUT terraform:  } 2025-11-01 12:08:06.599264 | orchestrator | 12:08:06.599 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-11-01 12:08:06.599325 | orchestrator | 12:08:06.599 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-11-01 12:08:06.599372 | orchestrator | 12:08:06.599 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.599411 | orchestrator | 12:08:06.599 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.599462 | orchestrator | 12:08:06.599 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.599495 | orchestrator | 12:08:06.599 STDOUT terraform:  + protocol = "icmp" 2025-11-01 12:08:06.599537 | orchestrator | 12:08:06.599 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.599584 | orchestrator | 12:08:06.599 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.599643 | orchestrator | 12:08:06.599 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.599687 | orchestrator | 12:08:06.599 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.599731 | orchestrator | 12:08:06.599 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.599773 | orchestrator | 12:08:06.599 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.599793 | orchestrator | 12:08:06.599 STDOUT terraform:  } 2025-11-01 12:08:06.599862 | orchestrator | 12:08:06.599 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-11-01 12:08:06.599925 | orchestrator | 12:08:06.599 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 
2025-11-01 12:08:06.599959 | orchestrator | 12:08:06.599 STDOUT terraform:  + description = "vrrp" 2025-11-01 12:08:06.599994 | orchestrator | 12:08:06.599 STDOUT terraform:  + direction = "ingress" 2025-11-01 12:08:06.600024 | orchestrator | 12:08:06.600 STDOUT terraform:  + ethertype = "IPv4" 2025-11-01 12:08:06.600073 | orchestrator | 12:08:06.600 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.600120 | orchestrator | 12:08:06.600 STDOUT terraform:  + protocol = "112" 2025-11-01 12:08:06.600165 | orchestrator | 12:08:06.600 STDOUT terraform:  + region = (known after apply) 2025-11-01 12:08:06.600211 | orchestrator | 12:08:06.600 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-11-01 12:08:06.600259 | orchestrator | 12:08:06.600 STDOUT terraform:  + remote_group_id = (known after apply) 2025-11-01 12:08:06.600305 | orchestrator | 12:08:06.600 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-11-01 12:08:06.600366 | orchestrator | 12:08:06.600 STDOUT terraform:  + security_group_id = (known after apply) 2025-11-01 12:08:06.600413 | orchestrator | 12:08:06.600 STDOUT terraform:  + tenant_id = (known after apply) 2025-11-01 12:08:06.600434 | orchestrator | 12:08:06.600 STDOUT terraform:  } 2025-11-01 12:08:06.600495 | orchestrator | 12:08:06.600 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-11-01 12:08:06.600551 | orchestrator | 12:08:06.600 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-11-01 12:08:06.600602 | orchestrator | 12:08:06.600 STDOUT terraform:  + all_tags = (known after apply) 2025-11-01 12:08:06.600684 | orchestrator | 12:08:06.600 STDOUT terraform:  + description = "management security group" 2025-11-01 12:08:06.600727 | orchestrator | 12:08:06.600 STDOUT terraform:  + id = (known after apply) 2025-11-01 12:08:06.600763 | orchestrator | 12:08:06.600 STDOUT terraform:  + name = "testbed-management" 
2025-11-01 12:08:06.600798 | orchestrator | 12:08:06.600 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.600833 | orchestrator | 12:08:06.600 STDOUT terraform:  + stateful = (known after apply)
2025-11-01 12:08:06.600867 | orchestrator | 12:08:06.600 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-01 12:08:06.600893 | orchestrator | 12:08:06.600 STDOUT terraform:  }
2025-11-01 12:08:06.600968 | orchestrator | 12:08:06.600 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-11-01 12:08:06.601030 | orchestrator | 12:08:06.600 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-11-01 12:08:06.601067 | orchestrator | 12:08:06.601 STDOUT terraform:  + all_tags = (known after apply)
2025-11-01 12:08:06.601112 | orchestrator | 12:08:06.601 STDOUT terraform:  + description = "node security group"
2025-11-01 12:08:06.601148 | orchestrator | 12:08:06.601 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.601179 | orchestrator | 12:08:06.601 STDOUT terraform:  + name = "testbed-node"
2025-11-01 12:08:06.601223 | orchestrator | 12:08:06.601 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.601264 | orchestrator | 12:08:06.601 STDOUT terraform:  + stateful = (known after apply)
2025-11-01 12:08:06.601300 | orchestrator | 12:08:06.601 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-01 12:08:06.601320 | orchestrator | 12:08:06.601 STDOUT terraform:  }
2025-11-01 12:08:06.601383 | orchestrator | 12:08:06.601 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-11-01 12:08:06.601434 | orchestrator | 12:08:06.601 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-11-01 12:08:06.601471 | orchestrator | 12:08:06.601 STDOUT terraform:  + all_tags = (known after apply)
2025-11-01 12:08:06.601534 | orchestrator | 12:08:06.601 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-11-01 12:08:06.601576 | orchestrator | 12:08:06.601 STDOUT terraform:  + dns_nameservers = [
2025-11-01 12:08:06.601606 | orchestrator | 12:08:06.601 STDOUT terraform:  + "8.8.8.8",
2025-11-01 12:08:06.601629 | orchestrator | 12:08:06.601 STDOUT terraform:  + "9.9.9.9",
2025-11-01 12:08:06.601650 | orchestrator | 12:08:06.601 STDOUT terraform:  ]
2025-11-01 12:08:06.601678 | orchestrator | 12:08:06.601 STDOUT terraform:  + enable_dhcp = true
2025-11-01 12:08:06.601715 | orchestrator | 12:08:06.601 STDOUT terraform:  + gateway_ip = (known after apply)
2025-11-01 12:08:06.601752 | orchestrator | 12:08:06.601 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.601780 | orchestrator | 12:08:06.601 STDOUT terraform:  + ip_version = 4
2025-11-01 12:08:06.601816 | orchestrator | 12:08:06.601 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-11-01 12:08:06.601858 | orchestrator | 12:08:06.601 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-11-01 12:08:06.601917 | orchestrator | 12:08:06.601 STDOUT terraform:  + name = "subnet-testbed-management"
2025-11-01 12:08:06.601957 | orchestrator | 12:08:06.601 STDOUT terraform:  + network_id = (known after apply)
2025-11-01 12:08:06.601984 | orchestrator | 12:08:06.601 STDOUT terraform:  + no_gateway = false
2025-11-01 12:08:06.602035 | orchestrator | 12:08:06.601 STDOUT terraform:  + region = (known after apply)
2025-11-01 12:08:06.602072 | orchestrator | 12:08:06.602 STDOUT terraform:  + service_types = (known after apply)
2025-11-01 12:08:06.602136 | orchestrator | 12:08:06.602 STDOUT terraform:  + tenant_id = (known after apply)
2025-11-01 12:08:06.602168 | orchestrator | 12:08:06.602 STDOUT terraform:  + allocation_pool {
2025-11-01 12:08:06.602201 | orchestrator | 12:08:06.602 STDOUT terraform:  + end = "192.168.31.250"
2025-11-01 12:08:06.602231 | orchestrator | 12:08:06.602 STDOUT terraform:  + start = "192.168.31.200"
2025-11-01 12:08:06.602252 | orchestrator | 12:08:06.602 STDOUT terraform:  }
2025-11-01 12:08:06.602283 | orchestrator | 12:08:06.602 STDOUT terraform:  }
2025-11-01 12:08:06.602320 | orchestrator | 12:08:06.602 STDOUT terraform:  # terraform_data.image will be created
2025-11-01 12:08:06.602386 | orchestrator | 12:08:06.602 STDOUT terraform:  + resource "terraform_data" "image" {
2025-11-01 12:08:06.602426 | orchestrator | 12:08:06.602 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.602456 | orchestrator | 12:08:06.602 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-01 12:08:06.602486 | orchestrator | 12:08:06.602 STDOUT terraform:  + output = (known after apply)
2025-11-01 12:08:06.602506 | orchestrator | 12:08:06.602 STDOUT terraform:  }
2025-11-01 12:08:06.602541 | orchestrator | 12:08:06.602 STDOUT terraform:  # terraform_data.image_node will be created
2025-11-01 12:08:06.602591 | orchestrator | 12:08:06.602 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-11-01 12:08:06.602624 | orchestrator | 12:08:06.602 STDOUT terraform:  + id = (known after apply)
2025-11-01 12:08:06.602652 | orchestrator | 12:08:06.602 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-11-01 12:08:06.602693 | orchestrator | 12:08:06.602 STDOUT terraform:  + output = (known after apply)
2025-11-01 12:08:06.602714 | orchestrator | 12:08:06.602 STDOUT terraform:  }
2025-11-01 12:08:06.602755 | orchestrator | 12:08:06.602 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-11-01 12:08:06.602777 | orchestrator | 12:08:06.602 STDOUT terraform: Changes to Outputs:
2025-11-01 12:08:06.602814 | orchestrator | 12:08:06.602 STDOUT terraform:  + manager_address = (sensitive value)
2025-11-01 12:08:06.602855 | orchestrator | 12:08:06.602 STDOUT terraform:  + private_key = (sensitive value)
2025-11-01 12:08:06.722464 | orchestrator | 12:08:06.722 STDOUT terraform: terraform_data.image: Creating...
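[Editor's note] The plan entries above can be read back into source form. A minimal HCL sketch of the VRRP rule as planned; only the literal attribute values (protocol "112", IPv4, 0.0.0.0/0, the "vrrp" description) come from the log, while the `security_group_id` reference is an assumption, not taken from the testbed repository:

```hcl
# Hypothetical reconstruction of the planned rule shown in the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed reference; the plan only shows "(known after apply)" here.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```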
2025-11-01 12:08:06.722550 | orchestrator | 12:08:06.722 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=30145a33-afd4-0eb5-0a99-23a06f7fe204]
2025-11-01 12:08:06.727242 | orchestrator | 12:08:06.727 STDOUT terraform: terraform_data.image_node: Creating...
2025-11-01 12:08:06.730337 | orchestrator | 12:08:06.730 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-11-01 12:08:06.730447 | orchestrator | 12:08:06.730 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=e9a94e0d-3802-c022-d053-3f0a209d1e56]
2025-11-01 12:08:06.739636 | orchestrator | 12:08:06.739 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-11-01 12:08:06.743579 | orchestrator | 12:08:06.743 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-11-01 12:08:06.744597 | orchestrator | 12:08:06.744 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-11-01 12:08:06.749366 | orchestrator | 12:08:06.749 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-11-01 12:08:06.750950 | orchestrator | 12:08:06.750 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-11-01 12:08:06.758620 | orchestrator | 12:08:06.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-11-01 12:08:06.759736 | orchestrator | 12:08:06.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-11-01 12:08:06.760745 | orchestrator | 12:08:06.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-11-01 12:08:06.764659 | orchestrator | 12:08:06.764 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-11-01 12:08:07.167674 | orchestrator | 12:08:07.167 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-01 12:08:07.174120 | orchestrator | 12:08:07.173 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-11-01 12:08:07.180984 | orchestrator | 12:08:07.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-11-01 12:08:07.181860 | orchestrator | 12:08:07.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-11-01 12:08:07.418790 | orchestrator | 12:08:07.418 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-11-01 12:08:07.423480 | orchestrator | 12:08:07.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-11-01 12:08:07.768480 | orchestrator | 12:08:07.768 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=f657b6bb-a2b8-4b16-b569-5471ed71872b]
2025-11-01 12:08:07.773750 | orchestrator | 12:08:07.773 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-11-01 12:08:10.344105 | orchestrator | 12:08:10.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=0d74391b-0b8f-495c-a577-c6c4d7ebf805]
2025-11-01 12:08:10.353503 | orchestrator | 12:08:10.353 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-11-01 12:08:10.359479 | orchestrator | 12:08:10.359 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=caf17145-8e33-4113-9dc7-3e1268f339ef]
2025-11-01 12:08:10.370430 | orchestrator | 12:08:10.370 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-11-01 12:08:10.378785 | orchestrator | 12:08:10.378 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=bacac2a1-f096-4371-9863-988edf40b0d8]
2025-11-01 12:08:10.385397 | orchestrator | 12:08:10.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-11-01 12:08:10.387532 | orchestrator | 12:08:10.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=ce385ad4-e039-43b9-b94b-c72aec6ecf03]
2025-11-01 12:08:10.390775 | orchestrator | 12:08:10.390 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=9a65d6519b504502d0e9ce4ede7000c93ecae1d1]
2025-11-01 12:08:10.391994 | orchestrator | 12:08:10.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-11-01 12:08:10.394186 | orchestrator | 12:08:10.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=b36ff255-a328-4794-8843-53478b92bf6f]
2025-11-01 12:08:10.397803 | orchestrator | 12:08:10.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-11-01 12:08:10.399129 | orchestrator | 12:08:10.399 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-11-01 12:08:10.404608 | orchestrator | 12:08:10.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=6d5232a6-49c3-4ba2-8072-69b94c6f6826]
2025-11-01 12:08:10.409219 | orchestrator | 12:08:10.409 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-11-01 12:08:10.447785 | orchestrator | 12:08:10.447 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa]
2025-11-01 12:08:10.458797 | orchestrator | 12:08:10.458 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-11-01 12:08:10.463159 | orchestrator | 12:08:10.462 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=5ce69623-bff4-4254-af6b-7ef1616921db]
2025-11-01 12:08:10.471104 | orchestrator | 12:08:10.470 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=28f9a73c7db0bae03331ab8778bc12bb3d089af5]
2025-11-01 12:08:10.479310 | orchestrator | 12:08:10.479 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-11-01 12:08:10.620835 | orchestrator | 12:08:10.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=79b0442c-a1d2-4926-aa81-9c91c373f6dc]
2025-11-01 12:08:11.101144 | orchestrator | 12:08:11.100 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=7df27743-c4c4-4826-83b6-b86405723b2c]
2025-11-01 12:08:11.344501 | orchestrator | 12:08:11.344 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=02568253-9630-4b18-869f-12294837102c]
2025-11-01 12:08:11.603414 | orchestrator | 12:08:11.351 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-11-01 12:08:13.754094 | orchestrator | 12:08:13.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=6b738f3f-811f-4b8a-84ab-a2aefc3daf42]
2025-11-01 12:08:13.761629 | orchestrator | 12:08:13.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=5acedd85-43bb-4c6d-8618-5f6f37d1f29a]
2025-11-01 12:08:13.769397 | orchestrator | 12:08:13.769 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=80247df9-6714-4677-8c78-bf7fdbf74e7f]
2025-11-01 12:08:13.817880 | orchestrator | 12:08:13.817 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=598994d1-b5fd-49e7-a955-1ee24af64c72]
2025-11-01 12:08:13.818869 | orchestrator | 12:08:13.818 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=c58f6bdd-66dc-4844-a9dc-254d04287c11]
2025-11-01 12:08:13.821529 | orchestrator | 12:08:13.821 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=5c5fea78-3082-490b-be73-37826e0214df]
2025-11-01 12:08:14.357577 | orchestrator | 12:08:14.357 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=a5125c4c-bf92-4e0d-970a-47da64527de4]
2025-11-01 12:08:14.364188 | orchestrator | 12:08:14.363 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-11-01 12:08:14.371926 | orchestrator | 12:08:14.371 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-11-01 12:08:14.372660 | orchestrator | 12:08:14.372 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-11-01 12:08:14.586450 | orchestrator | 12:08:14.586 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=67bc8872-1790-448c-9e67-9b43d5329950]
2025-11-01 12:08:14.599537 | orchestrator | 12:08:14.599 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-11-01 12:08:14.599868 | orchestrator | 12:08:14.599 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-11-01 12:08:14.600597 | orchestrator | 12:08:14.600 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-11-01 12:08:14.600953 | orchestrator | 12:08:14.600 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-11-01 12:08:14.600969 | orchestrator | 12:08:14.600 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-11-01 12:08:14.603255 | orchestrator | 12:08:14.603 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-11-01 12:08:14.613619 | orchestrator | 12:08:14.613 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=61860f34-6a99-48bd-bdc7-d20905df189d]
2025-11-01 12:08:14.626082 | orchestrator | 12:08:14.625 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-11-01 12:08:14.626577 | orchestrator | 12:08:14.626 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-11-01 12:08:14.626965 | orchestrator | 12:08:14.626 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-11-01 12:08:14.772379 | orchestrator | 12:08:14.772 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=205dc0c9-fd66-4ccf-9f91-6b52393b35bd]
2025-11-01 12:08:14.781207 | orchestrator | 12:08:14.780 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-11-01 12:08:14.833631 | orchestrator | 12:08:14.833 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=248cc540-a529-4c2a-b195-b8dce3034cce]
2025-11-01 12:08:14.845669 | orchestrator | 12:08:14.845 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-11-01 12:08:14.992112 | orchestrator | 12:08:14.991 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=97af88c5-5359-40ae-98c5-7c47f6f819e2]
2025-11-01 12:08:15.001304 | orchestrator | 12:08:15.001 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-11-01 12:08:15.067493 | orchestrator | 12:08:15.067 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=91299bee-9296-4311-b204-4fec7a9903da]
2025-11-01 12:08:15.076967 | orchestrator | 12:08:15.076 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-11-01 12:08:15.142986 | orchestrator | 12:08:15.142 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=aa83bf4d-3f1c-46bb-b616-8dc0a3ea77a5]
2025-11-01 12:08:15.154747 | orchestrator | 12:08:15.154 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-11-01 12:08:15.218156 | orchestrator | 12:08:15.217 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=60776a0b-244b-470b-b6c2-555b049a0826]
2025-11-01 12:08:15.229063 | orchestrator | 12:08:15.228 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-11-01 12:08:15.286879 | orchestrator | 12:08:15.286 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=070ad7d5-c61c-41fd-bdea-b245bbfa889a]
2025-11-01 12:08:15.299861 | orchestrator | 12:08:15.299 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=8624ab53-6639-41cb-9e30-e8e1f19f034b]
2025-11-01 12:08:15.303686 | orchestrator | 12:08:15.303 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-11-01 12:08:15.499957 | orchestrator | 12:08:15.499 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=8e8d9cb6-c3d6-4282-839d-86833675ad7d]
2025-11-01 12:08:15.790821 | orchestrator | 12:08:15.790 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b5d37c41-92ad-4eb0-a647-20e5795350e5]
2025-11-01 12:08:15.807721 | orchestrator | 12:08:15.807 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=29693b07-5d90-4f78-bf36-4db685b92066]
2025-11-01 12:08:15.813847 | orchestrator | 12:08:15.813 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=23198e66-ce83-412a-9bc8-fef2c04e5706]
2025-11-01 12:08:15.854488 | orchestrator | 12:08:15.854 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=5c1abd3f-70e3-4d4e-9903-8acb9bff4da6]
2025-11-01 12:08:15.959920 | orchestrator | 12:08:15.959 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=b50e9c39-8f81-4d27-8ebc-27dea5a807b0]
2025-11-01 12:08:16.357709 | orchestrator | 12:08:16.357 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=76aac4d6-e88d-4978-8e89-71d854760091]
2025-11-01 12:08:16.739657 | orchestrator | 12:08:16.739 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=aeacec4c-3719-4127-a635-d5b42a0bf572]
2025-11-01 12:08:17.195893 | orchestrator | 12:08:17.195 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=ab145f11-6396-4b41-bf04-78be330e24a8]
2025-11-01 12:08:17.204562 | orchestrator | 12:08:17.204 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-11-01 12:08:17.226569 | orchestrator | 12:08:17.226 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-11-01 12:08:17.227041 | orchestrator | 12:08:17.226 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-11-01 12:08:17.232866 | orchestrator | 12:08:17.232 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-11-01 12:08:17.242716 | orchestrator | 12:08:17.242 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-11-01 12:08:17.248172 | orchestrator | 12:08:17.248 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-11-01 12:08:17.250892 | orchestrator | 12:08:17.250 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-11-01 12:08:19.980106 | orchestrator | 12:08:19.979 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=fe39636f-69b5-4097-b949-db02cee14cf1]
2025-11-01 12:08:19.985477 | orchestrator | 12:08:19.985 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-11-01 12:08:19.990709 | orchestrator | 12:08:19.990 STDOUT terraform: local_file.inventory: Creating...
2025-11-01 12:08:19.992108 | orchestrator | 12:08:19.992 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-11-01 12:08:19.999418 | orchestrator | 12:08:19.999 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=81819e6834739a78d6410e96fda5cc8bcfada749]
2025-11-01 12:08:20.000333 | orchestrator | 12:08:20.000 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=52c2e56f981542713c4334c1cb7b5ba45f373cde]
2025-11-01 12:08:20.711150 | orchestrator | 12:08:20.710 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=fe39636f-69b5-4097-b949-db02cee14cf1]
2025-11-01 12:08:27.231150 | orchestrator | 12:08:27.230 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-11-01 12:08:27.234163 | orchestrator | 12:08:27.233 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-11-01 12:08:27.235341 | orchestrator | 12:08:27.235 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-11-01 12:08:27.244461 | orchestrator | 12:08:27.244 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-11-01 12:08:27.249763 | orchestrator | 12:08:27.249 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-11-01 12:08:27.252126 | orchestrator | 12:08:27.251 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-11-01 12:08:37.233871 | orchestrator | 12:08:37.233 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-11-01 12:08:37.235118 | orchestrator | 12:08:37.234 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-11-01 12:08:37.236307 | orchestrator | 12:08:37.236 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-11-01 12:08:37.245539 | orchestrator | 12:08:37.245 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-11-01 12:08:37.250747 | orchestrator | 12:08:37.250 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-11-01 12:08:37.253023 | orchestrator | 12:08:37.252 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-11-01 12:08:37.821610 | orchestrator | 12:08:37.821 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=029eafec-966a-44df-9d99-e6163e12c887]
2025-11-01 12:08:37.829082 | orchestrator | 12:08:37.828 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=d6af5e54-8cac-44b8-a030-2b2738e5f86b]
2025-11-01 12:08:37.898254 | orchestrator | 12:08:37.897 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=c0b27ea4-175f-48d7-a137-4527164a4c94]
2025-11-01 12:08:47.235246 | orchestrator | 12:08:47.234 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-11-01 12:08:47.237295 | orchestrator | 12:08:47.237 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-11-01 12:08:47.253610 | orchestrator | 12:08:47.253 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-11-01 12:08:48.196689 | orchestrator | 12:08:48.196 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=9f0e7fd1-fca2-4552-9690-ba56e5596b6c]
2025-11-01 12:08:48.342730 | orchestrator | 12:08:48.342 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=ca47fb8d-8556-4c93-8ab1-7390a2851787]
2025-11-01 12:08:48.391019 | orchestrator | 12:08:48.390 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=dca175b7-0b35-41ee-b034-9e47f015049b]
2025-11-01 12:08:48.414578 | orchestrator | 12:08:48.414 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-11-01 12:08:48.417101 | orchestrator | 12:08:48.416 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-11-01 12:08:48.425165 | orchestrator | 12:08:48.425 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-11-01 12:08:48.426545 | orchestrator | 12:08:48.426 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-11-01 12:08:48.427638 | orchestrator | 12:08:48.427 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-11-01 12:08:48.428030 | orchestrator | 12:08:48.427 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-11-01 12:08:48.431777 | orchestrator | 12:08:48.431 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2046180099614111591]
2025-11-01 12:08:48.439727 | orchestrator | 12:08:48.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-11-01 12:08:48.440442 | orchestrator | 12:08:48.440 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-11-01 12:08:48.446839 | orchestrator | 12:08:48.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-11-01 12:08:48.453749 | orchestrator | 12:08:48.453 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-11-01 12:08:48.468742 | orchestrator | 12:08:48.468 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-11-01 12:08:51.802559 | orchestrator | 12:08:51.802 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=c0b27ea4-175f-48d7-a137-4527164a4c94/ce385ad4-e039-43b9-b94b-c72aec6ecf03]
2025-11-01 12:08:51.822459 | orchestrator | 12:08:51.822 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=029eafec-966a-44df-9d99-e6163e12c887/b36ff255-a328-4794-8843-53478b92bf6f]
2025-11-01 12:08:52.115892 | orchestrator | 12:08:52.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=d6af5e54-8cac-44b8-a030-2b2738e5f86b/79b0442c-a1d2-4926-aa81-9c91c373f6dc]
2025-11-01 12:08:52.118276 | orchestrator | 12:08:52.117 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=029eafec-966a-44df-9d99-e6163e12c887/caf17145-8e33-4113-9dc7-3e1268f339ef]
2025-11-01 12:08:52.145851 | orchestrator | 12:08:52.145 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=c0b27ea4-175f-48d7-a137-4527164a4c94/0d74391b-0b8f-495c-a577-c6c4d7ebf805]
2025-11-01 12:08:52.175240 | orchestrator | 12:08:52.174 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=d6af5e54-8cac-44b8-a030-2b2738e5f86b/bacac2a1-f096-4371-9863-988edf40b0d8]
2025-11-01 12:08:58.221337 | orchestrator | 12:08:58.220 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=029eafec-966a-44df-9d99-e6163e12c887/6d5232a6-49c3-4ba2-8072-69b94c6f6826]
2025-11-01 12:08:58.250103 | orchestrator | 12:08:58.249 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=c0b27ea4-175f-48d7-a137-4527164a4c94/5ce69623-bff4-4254-af6b-7ef1616921db]
2025-11-01 12:08:58.277958 | orchestrator | 12:08:58.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=d6af5e54-8cac-44b8-a030-2b2738e5f86b/7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa]
2025-11-01 12:08:58.470181 | orchestrator | 12:08:58.469 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-11-01 12:09:08.473285 | orchestrator | 12:09:08.472 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-11-01 12:09:08.928120 | orchestrator | 12:09:08.927 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f0870e0d-b9d1-4947-a4f2-89613c026386]
2025-11-01 12:09:08.947785 | orchestrator | 12:09:08.947 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
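[Editor's note] The blank `manager_address` and `private_key` values in the Outputs section that follows are expected: outputs declared `sensitive = true` are shown as "(sensitive value)" in the plan and redacted in the apply summary, and must be read explicitly (e.g. with `terraform output -raw manager_address`). A minimal sketch, assuming only the output names from the log; the value expressions are hypothetical:

```hcl
# Sensitive outputs are redacted in console output.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumed source
  sensitive = true
}

output "private_key" {
  value     = local_sensitive_file.id_rsa.content # assumed source
  sensitive = true
}
```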
2025-11-01 12:09:08.947838 | orchestrator | 12:09:08.947 STDOUT terraform: Outputs:
2025-11-01 12:09:08.947848 | orchestrator | 12:09:08.947 STDOUT terraform: manager_address = 
2025-11-01 12:09:08.947886 | orchestrator | 12:09:08.947 STDOUT terraform: private_key = 
2025-11-01 12:09:09.138075 | orchestrator | ok: Runtime: 0:01:07.815642
2025-11-01 12:09:09.170479 | 
2025-11-01 12:09:09.170619 | TASK [Create infrastructure (stable)]
2025-11-01 12:09:09.704431 | orchestrator | skipping: Conditional result was False
2025-11-01 12:09:09.723770 | 
2025-11-01 12:09:09.724059 | TASK [Fetch manager address]
2025-11-01 12:09:10.140914 | orchestrator | ok
2025-11-01 12:09:10.149165 | 
2025-11-01 12:09:10.149287 | TASK [Set manager_host address]
2025-11-01 12:09:10.226439 | orchestrator | ok
2025-11-01 12:09:10.235408 | 
2025-11-01 12:09:10.235522 | LOOP [Update ansible collections]
2025-11-01 12:09:11.496260 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-11-01 12:09:11.496635 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-01 12:09:11.496695 | orchestrator | Starting galaxy collection install process
2025-11-01 12:09:11.496735 | orchestrator | Process install dependency map
2025-11-01 12:09:11.496770 | orchestrator | Starting collection install process
2025-11-01 12:09:11.496803 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-11-01 12:09:11.496844 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-11-01 12:09:11.496887 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-11-01 12:09:11.497000 | orchestrator | ok: Item: commons Runtime: 0:00:00.949396
2025-11-01 12:09:12.490273 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-11-01 12:09:12.490478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 12:09:12.491109 | orchestrator | Starting galaxy collection install process 2025-11-01 12:09:12.491197 | orchestrator | Process install dependency map 2025-11-01 12:09:12.491259 | orchestrator | Starting collection install process 2025-11-01 12:09:12.491298 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-11-01 12:09:12.491334 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-11-01 12:09:12.491368 | orchestrator | osism.services:999.0.0 was installed successfully 2025-11-01 12:09:12.491426 | orchestrator | ok: Item: services Runtime: 0:00:00.762484 2025-11-01 12:09:12.514307 | 2025-11-01 12:09:12.514450 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-01 12:09:23.009417 | orchestrator | ok 2025-11-01 12:09:23.020706 | 2025-11-01 12:09:23.020831 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-01 12:10:23.079441 | orchestrator | ok 2025-11-01 12:10:23.091154 | 2025-11-01 12:10:23.091279 | TASK [Fetch manager ssh hostkey] 2025-11-01 12:10:24.662932 | orchestrator | Output suppressed because no_log was given 2025-11-01 12:10:24.670462 | 2025-11-01 12:10:24.670605 | TASK [Get ssh keypair from terraform environment] 2025-11-01 12:10:25.207001 | orchestrator | ok: Runtime: 0:00:00.008003 2025-11-01 12:10:25.217419 | 2025-11-01 12:10:25.217558 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-01 12:10:25.250801 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
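The two wait tasks above first poll port 22 until the banner contains "OpenSSH", then sleep a fixed grace period. A bash sketch of that banner check, under the assumption that the server, like OpenSSH, sends its version string immediately after the TCP handshake (host, port, and timeout values are illustrative):

```shell
#!/usr/bin/env bash

# True if the banner names OpenSSH, mirroring the search string the task uses.
banner_ok() {
  case "$1" in *OpenSSH*) return 0 ;; *) return 1 ;; esac
}

# Poll the port until the banner matches or the deadline passes.
wait_for_ssh() {
  local host="$1" port="${2:-22}" deadline=$(( $(date +%s) + ${3:-300} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # Read the first line the server sends after connect; /dev/tcp is a
    # bash-ism, so this sketch assumes bash rather than POSIX sh.
    banner=$(timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port && head -n1 <&3" 2>/dev/null || true)
    banner_ok "$banner" && return 0
    sleep 5
  done
  return 1
}
```

The subsequent "wait a little longer" task is simply a fixed 60-second pause on top of this check.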
2025-11-01 12:10:25.260122 | 2025-11-01 12:10:25.260242 | TASK [Run manager part 0] 2025-11-01 12:10:26.194158 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 12:10:26.443312 | orchestrator | 2025-11-01 12:10:26.443362 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-11-01 12:10:26.443371 | orchestrator | 2025-11-01 12:10:26.443386 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-11-01 12:10:28.413618 | orchestrator | ok: [testbed-manager] 2025-11-01 12:10:28.413666 | orchestrator | 2025-11-01 12:10:28.413689 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-01 12:10:28.413700 | orchestrator | 2025-11-01 12:10:28.413710 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:10:30.331217 | orchestrator | ok: [testbed-manager] 2025-11-01 12:10:30.331249 | orchestrator | 2025-11-01 12:10:30.331255 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-01 12:10:30.933918 | orchestrator | ok: [testbed-manager] 2025-11-01 12:10:30.933986 | orchestrator | 2025-11-01 12:10:30.933996 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-01 12:10:30.985668 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:30.985717 | orchestrator | 2025-11-01 12:10:30.985730 | orchestrator | TASK [Update package cache] **************************************************** 2025-11-01 12:10:31.010877 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.010902 | orchestrator | 2025-11-01 12:10:31.010908 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-01 12:10:31.041665 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.041693 | 
orchestrator | 2025-11-01 12:10:31.041699 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-11-01 12:10:31.067654 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.067686 | orchestrator | 2025-11-01 12:10:31.067694 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-01 12:10:31.090617 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.090643 | orchestrator | 2025-11-01 12:10:31.090650 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-11-01 12:10:31.113673 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.113700 | orchestrator | 2025-11-01 12:10:31.113709 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-11-01 12:10:31.137575 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:10:31.137603 | orchestrator | 2025-11-01 12:10:31.137611 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-11-01 12:10:31.857668 | orchestrator | changed: [testbed-manager] 2025-11-01 12:10:31.857708 | orchestrator | 2025-11-01 12:10:31.857716 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-11-01 12:13:05.924594 | orchestrator | changed: [testbed-manager] 2025-11-01 12:13:05.924661 | orchestrator | 2025-11-01 12:13:05.924680 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-01 12:14:43.464709 | orchestrator | changed: [testbed-manager] 2025-11-01 12:14:43.464805 | orchestrator | 2025-11-01 12:14:43.464821 | orchestrator | TASK [Install required packages] *********************************************** 2025-11-01 12:15:07.558659 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:07.558734 | orchestrator | 2025-11-01 12:15:07.558751 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-11-01 12:15:17.105808 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:17.105875 | orchestrator | 2025-11-01 12:15:17.105890 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-01 12:15:17.153485 | orchestrator | ok: [testbed-manager] 2025-11-01 12:15:17.153569 | orchestrator | 2025-11-01 12:15:17.153583 | orchestrator | TASK [Get current user] ******************************************************** 2025-11-01 12:15:17.967525 | orchestrator | ok: [testbed-manager] 2025-11-01 12:15:17.967608 | orchestrator | 2025-11-01 12:15:17.967626 | orchestrator | TASK [Create venv directory] *************************************************** 2025-11-01 12:15:18.686381 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:18.686419 | orchestrator | 2025-11-01 12:15:18.686428 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-11-01 12:15:25.914841 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:25.914888 | orchestrator | 2025-11-01 12:15:25.914910 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-11-01 12:15:32.744868 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:32.744956 | orchestrator | 2025-11-01 12:15:32.744981 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-11-01 12:15:35.743188 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:35.743257 | orchestrator | 2025-11-01 12:15:35.743274 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-11-01 12:15:37.667570 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:37.667650 | orchestrator | 2025-11-01 12:15:37.667664 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-11-01 
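The venv tasks traced above amount to creating `/opt/venv` and installing pinned Python dependencies into it. A condensed sketch, with the target directory parameterized so it need not be `/opt/venv`:

```shell
#!/usr/bin/env bash
set -e

# Create the virtualenv; `python3 -m venv` also installs pip into it.
make_venv() {
  python3 -m venv "$1"
}

# The follow-up installs, with the same version pins the tasks above
# apply. Kept as a separate step because it needs network access.
install_deps() {
  "$1/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
}
```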
12:15:38.823449 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-01 12:15:38.823535 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-01 12:15:38.823548 | orchestrator | 2025-11-01 12:15:38.823560 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-11-01 12:15:38.861561 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-01 12:15:38.861638 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-01 12:15:38.861653 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-01 12:15:38.861665 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-11-01 12:15:46.928023 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-11-01 12:15:46.928094 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-11-01 12:15:46.928106 | orchestrator | 2025-11-01 12:15:46.928116 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-11-01 12:15:47.475656 | orchestrator | changed: [testbed-manager] 2025-11-01 12:15:47.475711 | orchestrator | 2025-11-01 12:15:47.475719 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-11-01 12:19:15.890890 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-11-01 12:19:15.890939 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-11-01 12:19:15.890947 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-11-01 12:19:15.890953 | orchestrator | 2025-11-01 12:19:15.890959 | orchestrator | TASK [Install local collections] *********************************************** 2025-11-01 12:19:18.416021 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-11-01 12:19:18.416098 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-11-01 12:19:18.416112 | orchestrator | 2025-11-01 12:19:18.416124 | orchestrator | PLAY [Create operator user] **************************************************** 2025-11-01 12:19:18.416150 | orchestrator | 2025-11-01 12:19:18.416163 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:19:19.940964 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:19.941055 | orchestrator | 2025-11-01 12:19:19.941074 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-11-01 12:19:19.984341 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:19.984395 | orchestrator | 2025-11-01 12:19:19.984410 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-11-01 12:19:20.052831 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:20.052915 | orchestrator | 2025-11-01 12:19:20.052931 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-11-01 12:19:20.936025 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:20.936115 | orchestrator | 2025-11-01 12:19:20.936132 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-11-01 12:19:21.682162 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:21.682252 | orchestrator | 2025-11-01 12:19:21.682270 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-11-01 12:19:23.083273 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-11-01 12:19:23.083355 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-11-01 12:19:23.083371 | orchestrator | 2025-11-01 12:19:23.083400 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-11-01 12:19:24.417015 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:24.417121 | orchestrator | 2025-11-01 12:19:24.417139 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-11-01 12:19:26.204064 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-11-01 12:19:26.204252 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-11-01 12:19:26.204279 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-11-01 12:19:26.204300 | orchestrator | 2025-11-01 12:19:26.204322 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-11-01 12:19:26.258471 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:26.258559 | orchestrator | 2025-11-01 12:19:26.258573 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-11-01 12:19:26.839400 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:26.839495 | orchestrator | 2025-11-01 12:19:26.839513 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-11-01 12:19:26.907646 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:26.907689 | orchestrator | 2025-11-01 12:19:26.907696 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-11-01 12:19:27.806995 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:19:27.807037 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:27.807046 | orchestrator | 2025-11-01 12:19:27.807054 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-11-01 12:19:27.842997 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:27.843063 | orchestrator | 2025-11-01 12:19:27.843078 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-11-01 12:19:27.872396 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:27.872444 | orchestrator | 2025-11-01 12:19:27.872457 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-11-01 12:19:27.902066 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:27.902108 | orchestrator | 2025-11-01 12:19:27.902120 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-11-01 12:19:27.961700 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:27.961752 | orchestrator | 2025-11-01 12:19:27.961768 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-11-01 12:19:28.677982 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:28.678096 | orchestrator | 2025-11-01 12:19:28.678112 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-11-01 12:19:28.678125 | orchestrator | 2025-11-01 12:19:28.678136 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:19:30.143913 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:30.143993 | orchestrator | 2025-11-01 12:19:30.144009 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-11-01 12:19:31.198861 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:31.199511 | orchestrator | 2025-11-01 12:19:31.199526 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:19:31.199532 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-11-01 12:19:31.199536 | orchestrator | 2025-11-01 12:19:31.568097 | orchestrator | ok: Runtime: 0:09:05.761263 2025-11-01 12:19:31.585313 | 2025-11-01 12:19:31.585441 | TASK [Point 
out that logging in to the manager is now possible] 2025-11-01 12:19:31.631918 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-11-01 12:19:31.642081 | 2025-11-01 12:19:31.642203 | TASK [Point out that the following task takes some time and does not give any output] 2025-11-01 12:19:31.673592 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-11-01 12:19:31.681250 | 2025-11-01 12:19:31.681353 | TASK [Run manager part 1 + 2] 2025-11-01 12:19:32.669239 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-11-01 12:19:32.722271 | orchestrator | 2025-11-01 12:19:32.722333 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-11-01 12:19:32.722350 | orchestrator | 2025-11-01 12:19:32.722379 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:19:35.801265 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:35.801327 | orchestrator | 2025-11-01 12:19:35.801374 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-11-01 12:19:35.833053 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:35.833101 | orchestrator | 2025-11-01 12:19:35.833116 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-11-01 12:19:35.864080 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:35.864123 | orchestrator | 2025-11-01 12:19:35.864136 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-01 12:19:35.899503 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:35.899552 | orchestrator | 2025-11-01 12:19:35.899568 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-11-01 12:19:35.956775 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:35.956926 | orchestrator | 2025-11-01 12:19:35.956947 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-01 12:19:36.013548 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:36.013589 | orchestrator | 2025-11-01 12:19:36.013604 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-01 12:19:36.062575 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-11-01 12:19:36.062612 | orchestrator | 2025-11-01 12:19:36.062626 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-01 12:19:36.771923 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:36.771972 | orchestrator | 2025-11-01 12:19:36.771989 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-01 12:19:36.820327 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:19:36.820377 | orchestrator | 2025-11-01 12:19:36.820391 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-01 12:19:38.156676 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:38.156736 | orchestrator | 2025-11-01 12:19:38.156755 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-01 12:19:38.727954 | orchestrator | ok: [testbed-manager] 2025-11-01 12:19:38.728017 | orchestrator | 2025-11-01 12:19:38.728031 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-01 12:19:39.854533 | orchestrator | changed: [testbed-manager] 2025-11-01 12:19:39.854590 | orchestrator | 2025-11-01 12:19:39.854605 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-11-01 12:20:01.121424 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:01.121557 | orchestrator | 2025-11-01 12:20:01.121576 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-11-01 12:20:01.807418 | orchestrator | ok: [testbed-manager] 2025-11-01 12:20:01.807528 | orchestrator | 2025-11-01 12:20:01.807546 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-11-01 12:20:01.862174 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:20:01.862229 | orchestrator | 2025-11-01 12:20:01.862237 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-11-01 12:20:02.816175 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:02.816257 | orchestrator | 2025-11-01 12:20:02.816272 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-11-01 12:20:03.787715 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:03.787756 | orchestrator | 2025-11-01 12:20:03.787764 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-11-01 12:20:04.346305 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:04.346374 | orchestrator | 2025-11-01 12:20:04.346389 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-11-01 12:20:04.383186 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-11-01 12:20:04.383253 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-11-01 12:20:04.383264 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-11-01 12:20:04.383272 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-11-01 12:20:06.150172 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:06.150208 | orchestrator | 2025-11-01 12:20:06.150217 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-11-01 12:20:16.005813 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-11-01 12:20:16.005903 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-11-01 12:20:16.005921 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-11-01 12:20:16.005933 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-11-01 12:20:16.005951 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-11-01 12:20:16.005962 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-11-01 12:20:16.005973 | orchestrator | 2025-11-01 12:20:16.005986 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-11-01 12:20:17.071807 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:17.071851 | orchestrator | 2025-11-01 12:20:17.071860 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-11-01 12:20:17.113292 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:20:17.113330 | orchestrator | 2025-11-01 12:20:17.113338 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-11-01 12:20:19.895042 | orchestrator | changed: [testbed-manager] 2025-11-01 12:20:19.895124 | orchestrator | 2025-11-01 12:20:19.895138 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-11-01 12:20:19.934228 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:20:19.934290 | orchestrator | 2025-11-01 12:20:19.934304 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-11-01 12:22:11.607751 | orchestrator | changed: [testbed-manager] 2025-11-01 
12:22:11.607860 | orchestrator | 2025-11-01 12:22:11.607880 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-01 12:22:12.891186 | orchestrator | ok: [testbed-manager] 2025-11-01 12:22:12.891273 | orchestrator | 2025-11-01 12:22:12.891291 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:22:12.891305 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-11-01 12:22:12.891317 | orchestrator | 2025-11-01 12:22:13.319444 | orchestrator | ok: Runtime: 0:02:41.014049 2025-11-01 12:22:13.337265 | 2025-11-01 12:22:13.337395 | TASK [Reboot manager] 2025-11-01 12:22:14.872416 | orchestrator | ok: Runtime: 0:00:00.985888 2025-11-01 12:22:14.888144 | 2025-11-01 12:22:14.888289 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-11-01 12:22:31.281064 | orchestrator | ok 2025-11-01 12:22:31.288489 | 2025-11-01 12:22:31.288603 | TASK [Wait a little longer for the manager so that everything is ready] 2025-11-01 12:23:31.341813 | orchestrator | ok 2025-11-01 12:23:31.351234 | 2025-11-01 12:23:31.351357 | TASK [Deploy manager + bootstrap nodes] 2025-11-01 12:23:34.102292 | orchestrator | 2025-11-01 12:23:34.102499 | orchestrator | # DEPLOY MANAGER 2025-11-01 12:23:34.102525 | orchestrator | 2025-11-01 12:23:34.102539 | orchestrator | + set -e 2025-11-01 12:23:34.102552 | orchestrator | + echo 2025-11-01 12:23:34.102566 | orchestrator | + echo '# DEPLOY MANAGER' 2025-11-01 12:23:34.102584 | orchestrator | + echo 2025-11-01 12:23:34.102633 | orchestrator | + cat /opt/manager-vars.sh 2025-11-01 12:23:34.105433 | orchestrator | export NUMBER_OF_NODES=6 2025-11-01 12:23:34.105483 | orchestrator | 2025-11-01 12:23:34.105496 | orchestrator | export CEPH_VERSION=reef 2025-11-01 12:23:34.105509 | orchestrator | export CONFIGURATION_VERSION=main 2025-11-01 12:23:34.105523 | orchestrator 
| export MANAGER_VERSION=latest 2025-11-01 12:23:34.105545 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-11-01 12:23:34.105556 | orchestrator | 2025-11-01 12:23:34.105574 | orchestrator | export ARA=false 2025-11-01 12:23:34.105585 | orchestrator | export DEPLOY_MODE=manager 2025-11-01 12:23:34.105603 | orchestrator | export TEMPEST=false 2025-11-01 12:23:34.105614 | orchestrator | export IS_ZUUL=true 2025-11-01 12:23:34.105626 | orchestrator | 2025-11-01 12:23:34.105643 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:23:34.105655 | orchestrator | export EXTERNAL_API=false 2025-11-01 12:23:34.105666 | orchestrator | 2025-11-01 12:23:34.105676 | orchestrator | export IMAGE_USER=ubuntu 2025-11-01 12:23:34.105690 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-11-01 12:23:34.105701 | orchestrator | 2025-11-01 12:23:34.105711 | orchestrator | export CEPH_STACK=ceph-ansible 2025-11-01 12:23:34.105727 | orchestrator | 2025-11-01 12:23:34.105739 | orchestrator | + echo 2025-11-01 12:23:34.105751 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 12:23:34.107424 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 12:23:34.107444 | orchestrator | ++ INTERACTIVE=false 2025-11-01 12:23:34.107486 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 12:23:34.107506 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 12:23:34.107530 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 12:23:34.107553 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 12:23:34.107573 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 12:23:34.107588 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 12:23:34.107599 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 12:23:34.107610 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 12:23:34.107620 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 12:23:34.107715 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 12:23:34.107731 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 12:23:34.107742 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 12:23:34.107761 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 12:23:34.107772 | orchestrator | ++ export ARA=false 2025-11-01 12:23:34.107783 | orchestrator | ++ ARA=false 2025-11-01 12:23:34.107794 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 12:23:34.107805 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 12:23:34.107816 | orchestrator | ++ export TEMPEST=false 2025-11-01 12:23:34.107826 | orchestrator | ++ TEMPEST=false 2025-11-01 12:23:34.107841 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 12:23:34.107852 | orchestrator | ++ IS_ZUUL=true 2025-11-01 12:23:34.107863 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:23:34.107874 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:23:34.107885 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 12:23:34.107896 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 12:23:34.107907 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 12:23:34.107917 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 12:23:34.107928 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 12:23:34.107939 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 12:23:34.107950 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 12:23:34.107961 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 12:23:34.107975 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-11-01 12:23:34.156034 | orchestrator | + docker version 2025-11-01 12:23:34.468906 | orchestrator | Client: Docker Engine - Community 2025-11-01 12:23:34.468984 | orchestrator | Version: 27.5.1 2025-11-01 12:23:34.468996 | orchestrator | API version: 1.47 2025-11-01 12:23:34.469004 | orchestrator | Go version: go1.22.11 2025-11-01 12:23:34.469012 | orchestrator | Git commit: 9f9e405 2025-11-01 
12:23:34.469020 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-01 12:23:34.469029 | orchestrator | OS/Arch: linux/amd64 2025-11-01 12:23:34.469036 | orchestrator | Context: default 2025-11-01 12:23:34.469044 | orchestrator | 2025-11-01 12:23:34.469053 | orchestrator | Server: Docker Engine - Community 2025-11-01 12:23:34.469061 | orchestrator | Engine: 2025-11-01 12:23:34.469069 | orchestrator | Version: 27.5.1 2025-11-01 12:23:34.469077 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-11-01 12:23:34.469111 | orchestrator | Go version: go1.22.11 2025-11-01 12:23:34.469119 | orchestrator | Git commit: 4c9b3b0 2025-11-01 12:23:34.469127 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-11-01 12:23:34.469135 | orchestrator | OS/Arch: linux/amd64 2025-11-01 12:23:34.469143 | orchestrator | Experimental: false 2025-11-01 12:23:34.469151 | orchestrator | containerd: 2025-11-01 12:23:34.469169 | orchestrator | Version: v1.7.28 2025-11-01 12:23:34.469178 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866 2025-11-01 12:23:34.469186 | orchestrator | runc: 2025-11-01 12:23:34.469194 | orchestrator | Version: 1.3.0 2025-11-01 12:23:34.469202 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1 2025-11-01 12:23:34.469210 | orchestrator | docker-init: 2025-11-01 12:23:34.469220 | orchestrator | Version: 0.19.0 2025-11-01 12:23:34.469229 | orchestrator | GitCommit: de40ad0 2025-11-01 12:23:34.473972 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-11-01 12:23:34.481867 | orchestrator | + set -e 2025-11-01 12:23:34.481892 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 12:23:34.481902 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 12:23:34.481911 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 12:23:34.481920 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 12:23:34.481929 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 12:23:34.481937 | orchestrator | ++ export 
CONFIGURATION_VERSION=main 2025-11-01 12:23:34.481947 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 12:23:34.481955 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 12:23:34.481964 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 12:23:34.481973 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 12:23:34.481981 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 12:23:34.481990 | orchestrator | ++ export ARA=false 2025-11-01 12:23:34.481998 | orchestrator | ++ ARA=false 2025-11-01 12:23:34.482007 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 12:23:34.482044 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 12:23:34.482054 | orchestrator | ++ export TEMPEST=false 2025-11-01 12:23:34.482062 | orchestrator | ++ TEMPEST=false 2025-11-01 12:23:34.482071 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 12:23:34.482079 | orchestrator | ++ IS_ZUUL=true 2025-11-01 12:23:34.482088 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:23:34.482097 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:23:34.482106 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 12:23:34.482114 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 12:23:34.482123 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 12:23:34.482131 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 12:23:34.482140 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 12:23:34.482149 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 12:23:34.482158 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 12:23:34.482166 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 12:23:34.482175 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 12:23:34.482183 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 12:23:34.482193 | orchestrator | ++ INTERACTIVE=false 2025-11-01 12:23:34.482201 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 
12:23:34.482213 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 12:23:34.482227 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 12:23:34.482236 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 12:23:34.482245 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-11-01 12:23:34.489296 | orchestrator | + set -e 2025-11-01 12:23:34.489312 | orchestrator | + VERSION=reef 2025-11-01 12:23:34.490569 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-01 12:23:34.497294 | orchestrator | + [[ -n ceph_version: reef ]] 2025-11-01 12:23:34.497310 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-11-01 12:23:34.503809 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-11-01 12:23:34.510354 | orchestrator | + set -e 2025-11-01 12:23:34.510422 | orchestrator | + VERSION=2024.2 2025-11-01 12:23:34.511099 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-11-01 12:23:34.513892 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-11-01 12:23:34.513918 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-11-01 12:23:34.518477 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-11-01 12:23:34.519399 | orchestrator | ++ semver latest 7.0.0 2025-11-01 12:23:34.573211 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 12:23:34.573263 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 12:23:34.573277 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-11-01 12:23:34.573289 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-11-01 12:23:34.658520 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-11-01 12:23:34.662346 | orchestrator | + source /opt/venv/bin/activate 2025-11-01 
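The `set-ceph-version.sh` and `set-openstack-version.sh` calls traced above pin release versions by rewriting a key in the manager's `configuration.yml`: grep guards that the key exists, then sed replaces its value in place. A minimal sketch of that pattern (the `set_yaml_version` helper name is hypothetical; the file path and key names are taken from the log):

```shell
#!/usr/bin/env bash
set -e

# Replace "key: <old>" with "key: <new>" in a YAML file, but only if the
# key already exists -- mirroring the grep guard seen in the trace.
set_yaml_version() {
    local file="$1" key="$2" version="$3"
    if grep -q "^${key}:" "$file"; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    else
        echo "${key} not found in ${file}" >&2
        return 1
    fi
}

# Usage (paths and values from the log):
# set_yaml_version /opt/configuration/environments/manager/configuration.yml ceph_version reef
# set_yaml_version /opt/configuration/environments/manager/configuration.yml openstack_version 2024.2
```

Guarding with grep first means a missing key fails loudly instead of sed silently matching nothing.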
12:23:34.663406 | orchestrator | ++ deactivate nondestructive 2025-11-01 12:23:34.663435 | orchestrator | ++ '[' -n '' ']' 2025-11-01 12:23:34.663446 | orchestrator | ++ '[' -n '' ']' 2025-11-01 12:23:34.663486 | orchestrator | ++ hash -r 2025-11-01 12:23:34.663497 | orchestrator | ++ '[' -n '' ']' 2025-11-01 12:23:34.663508 | orchestrator | ++ unset VIRTUAL_ENV 2025-11-01 12:23:34.663519 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-11-01 12:23:34.663530 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-11-01 12:23:34.663542 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-11-01 12:23:34.663552 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-11-01 12:23:34.663565 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-11-01 12:23:34.663575 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-11-01 12:23:34.663587 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-01 12:23:34.663598 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-11-01 12:23:34.663609 | orchestrator | ++ export PATH 2025-11-01 12:23:34.663619 | orchestrator | ++ '[' -n '' ']' 2025-11-01 12:23:34.663630 | orchestrator | ++ '[' -z '' ']' 2025-11-01 12:23:34.663640 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-11-01 12:23:34.663651 | orchestrator | ++ PS1='(venv) ' 2025-11-01 12:23:34.663661 | orchestrator | ++ export PS1 2025-11-01 12:23:34.663672 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-11-01 12:23:34.663683 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-11-01 12:23:34.663693 | orchestrator | ++ hash -r 2025-11-01 12:23:34.663721 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-11-01 12:23:36.247487 | orchestrator | 2025-11-01 12:23:36.247597 | orchestrator | 
PLAY [Copy custom facts] ******************************************************* 2025-11-01 12:23:36.247673 | orchestrator | 2025-11-01 12:23:36.247687 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-11-01 12:23:36.859715 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:36.859802 | orchestrator | 2025-11-01 12:23:36.859813 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-11-01 12:23:37.947055 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:37.947162 | orchestrator | 2025-11-01 12:23:37.947179 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-11-01 12:23:37.947192 | orchestrator | 2025-11-01 12:23:37.947203 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:23:40.506122 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:40.506211 | orchestrator | 2025-11-01 12:23:40.506221 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-11-01 12:23:40.549815 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:40.549834 | orchestrator | 2025-11-01 12:23:40.549843 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-11-01 12:23:41.049708 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:41.049756 | orchestrator | 2025-11-01 12:23:41.049767 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-11-01 12:23:41.091116 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:23:41.091136 | orchestrator | 2025-11-01 12:23:41.091150 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-11-01 12:23:41.462774 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:41.462834 | orchestrator | 2025-11-01 12:23:41.462846 | 
orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-11-01 12:23:41.509244 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:23:41.509337 | orchestrator | 2025-11-01 12:23:41.509353 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-11-01 12:23:41.869896 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:41.869973 | orchestrator | 2025-11-01 12:23:41.869987 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-11-01 12:23:41.996440 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:23:41.996529 | orchestrator | 2025-11-01 12:23:41.996542 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-11-01 12:23:41.996553 | orchestrator | 2025-11-01 12:23:41.996567 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:23:43.824025 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:43.824108 | orchestrator | 2025-11-01 12:23:43.824121 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-11-01 12:23:43.951233 | orchestrator | included: osism.services.traefik for testbed-manager 2025-11-01 12:23:43.951290 | orchestrator | 2025-11-01 12:23:43.951303 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-11-01 12:23:44.025999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-11-01 12:23:44.026101 | orchestrator | 2025-11-01 12:23:44.026114 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-11-01 12:23:45.207971 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-11-01 12:23:45.208052 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/certificates) 2025-11-01 12:23:45.208065 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-11-01 12:23:45.208076 | orchestrator | 2025-11-01 12:23:45.208090 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-11-01 12:23:47.165519 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-11-01 12:23:47.165617 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-11-01 12:23:47.165631 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-11-01 12:23:47.165641 | orchestrator | 2025-11-01 12:23:47.165650 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-11-01 12:23:47.866821 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:23:47.866895 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:47.866905 | orchestrator | 2025-11-01 12:23:47.866914 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-11-01 12:23:48.621822 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:23:48.621890 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:48.621900 | orchestrator | 2025-11-01 12:23:48.621909 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-11-01 12:23:48.673360 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:23:48.673379 | orchestrator | 2025-11-01 12:23:48.673388 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-11-01 12:23:49.050528 | orchestrator | ok: [testbed-manager] 2025-11-01 12:23:49.050627 | orchestrator | 2025-11-01 12:23:49.050642 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-11-01 12:23:49.128935 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-11-01 12:23:49.128965 | orchestrator | 2025-11-01 12:23:49.128977 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-11-01 12:23:50.330781 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:50.330865 | orchestrator | 2025-11-01 12:23:50.330877 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-11-01 12:23:51.224653 | orchestrator | changed: [testbed-manager] 2025-11-01 12:23:51.224749 | orchestrator | 2025-11-01 12:23:51.224765 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-11-01 12:24:00.816024 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:00.816128 | orchestrator | 2025-11-01 12:24:00.816145 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-11-01 12:24:00.885706 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:24:00.885773 | orchestrator | 2025-11-01 12:24:00.885788 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-11-01 12:24:00.885800 | orchestrator | 2025-11-01 12:24:00.885812 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:24:02.869817 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:02.869919 | orchestrator | 2025-11-01 12:24:02.869961 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-11-01 12:24:03.008740 | orchestrator | included: osism.services.manager for testbed-manager 2025-11-01 12:24:03.008803 | orchestrator | 2025-11-01 12:24:03.008817 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-11-01 12:24:03.079216 | orchestrator | included: 
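The "Create traefik external network" task reports `changed` on first run and `ok` later in the manager play, i.e. it is idempotent. A sketch of the equivalent docker CLI pattern (the function name is hypothetical; the role itself uses an Ansible module, not this script):

```shell
#!/usr/bin/env bash
set -e

# Create a docker network only if it does not already exist, so repeated
# runs leave an existing network untouched (changed -> ok on reruns).
ensure_external_network() {
    local name="$1"
    if ! docker network inspect "$name" >/dev/null 2>&1; then
        docker network create "$name"
    fi
}

# Usage: ensure_external_network traefik
```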
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 12:24:03.079269 | orchestrator | 2025-11-01 12:24:03.079283 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-11-01 12:24:06.026974 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:06.027774 | orchestrator | 2025-11-01 12:24:06.027807 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-11-01 12:24:06.078678 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:06.078747 | orchestrator | 2025-11-01 12:24:06.078764 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-11-01 12:24:06.223174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-11-01 12:24:06.223253 | orchestrator | 2025-11-01 12:24:06.223268 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-11-01 12:24:09.274678 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-11-01 12:24:09.274785 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-11-01 12:24:09.274799 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-11-01 12:24:09.274812 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-11-01 12:24:09.274824 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-11-01 12:24:09.274836 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-11-01 12:24:09.274847 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-11-01 12:24:09.274858 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-11-01 12:24:09.274870 | orchestrator | 2025-11-01 12:24:09.274882 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-11-01 12:24:09.978249 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:09.978326 | orchestrator | 2025-11-01 12:24:09.978342 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-11-01 12:24:10.678441 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:10.678572 | orchestrator | 2025-11-01 12:24:10.678586 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-11-01 12:24:10.765644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-11-01 12:24:10.765693 | orchestrator | 2025-11-01 12:24:10.765708 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-11-01 12:24:12.183824 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-11-01 12:24:12.183909 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-11-01 12:24:12.183923 | orchestrator | 2025-11-01 12:24:12.183935 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-11-01 12:24:12.881838 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:12.881914 | orchestrator | 2025-11-01 12:24:12.881927 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-11-01 12:24:12.926636 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:24:12.926678 | orchestrator | 2025-11-01 12:24:12.926690 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-11-01 12:24:12.993254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-11-01 12:24:12.993284 | orchestrator | 2025-11-01 12:24:12.993296 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-11-01 12:24:13.694127 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:13.694187 | orchestrator | 2025-11-01 12:24:13.694199 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-11-01 12:24:13.752066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-11-01 12:24:13.752137 | orchestrator | 2025-11-01 12:24:13.752150 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-11-01 12:24:15.344239 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:24:15.344330 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:24:15.344345 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:15.344358 | orchestrator | 2025-11-01 12:24:15.344369 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-11-01 12:24:16.093132 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:16.093218 | orchestrator | 2025-11-01 12:24:16.093233 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-11-01 12:24:16.144915 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:24:16.144945 | orchestrator | 2025-11-01 12:24:16.144957 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-11-01 12:24:16.243663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-11-01 12:24:16.243695 | orchestrator | 2025-11-01 12:24:16.243707 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-11-01 12:24:16.801042 | orchestrator | changed: [testbed-manager] 2025-11-01 
12:24:16.801107 | orchestrator | 2025-11-01 12:24:16.801119 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-11-01 12:24:17.236333 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:17.237052 | orchestrator | 2025-11-01 12:24:17.237080 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-11-01 12:24:18.627927 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-11-01 12:24:18.628009 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-11-01 12:24:18.628022 | orchestrator | 2025-11-01 12:24:18.628033 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-11-01 12:24:19.298930 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:19.299013 | orchestrator | 2025-11-01 12:24:19.299027 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-11-01 12:24:19.717094 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:19.717169 | orchestrator | 2025-11-01 12:24:19.717183 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-11-01 12:24:20.126331 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:20.126406 | orchestrator | 2025-11-01 12:24:20.126420 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-11-01 12:24:20.177910 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:24:20.177941 | orchestrator | 2025-11-01 12:24:20.177953 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-11-01 12:24:20.262385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-11-01 12:24:20.262438 | orchestrator | 2025-11-01 12:24:20.262498 | orchestrator | TASK 
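The two celery config tasks above raise `fs.inotify.max_user_watches` and `fs.inotify.max_user_instances`, so the file-watching containers on the manager do not exhaust inotify handles. A sketch of the equivalent sysctl calls (the specific limit values are assumptions, not shown in the log; writing them requires root, reading does not):

```shell
#!/usr/bin/env bash
set -e

# Raise inotify limits (needs root; values are illustrative assumptions).
apply_inotify_limits() {
    sysctl -w fs.inotify.max_user_watches=524288
    sysctl -w fs.inotify.max_user_instances=512
}

# Inspecting the current limits needs no privileges:
if [ -r /proc/sys/fs/inotify/max_user_watches ]; then
    cat /proc/sys/fs/inotify/max_user_watches
fi
```

To persist across reboots, the same keys would go into a file under `/etc/sysctl.d/`, which is what an Ansible `sysctl` task does by default.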
[osism.services.manager : Include wrapper vars file] ********************** 2025-11-01 12:24:20.306184 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:20.306210 | orchestrator | 2025-11-01 12:24:20.306222 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-11-01 12:24:22.534217 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-11-01 12:24:22.534322 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-11-01 12:24:22.534339 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-11-01 12:24:22.534351 | orchestrator | 2025-11-01 12:24:22.534364 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-11-01 12:24:23.330279 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:23.330362 | orchestrator | 2025-11-01 12:24:23.330379 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-11-01 12:24:24.104307 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:24.104394 | orchestrator | 2025-11-01 12:24:24.104406 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-11-01 12:24:24.844517 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:24.844624 | orchestrator | 2025-11-01 12:24:24.844650 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-11-01 12:24:24.925766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-11-01 12:24:24.925833 | orchestrator | 2025-11-01 12:24:24.925847 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-11-01 12:24:24.975689 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:24.975723 | orchestrator | 2025-11-01 12:24:24.975735 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-11-01 12:24:25.739022 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-11-01 12:24:25.739112 | orchestrator | 2025-11-01 12:24:25.739127 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-11-01 12:24:25.833125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-11-01 12:24:25.833162 | orchestrator | 2025-11-01 12:24:25.833178 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-11-01 12:24:26.596246 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:26.596327 | orchestrator | 2025-11-01 12:24:26.596341 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-11-01 12:24:27.224989 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:27.225076 | orchestrator | 2025-11-01 12:24:27.225091 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-11-01 12:24:27.270302 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:24:27.270329 | orchestrator | 2025-11-01 12:24:27.270341 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-11-01 12:24:27.319656 | orchestrator | ok: [testbed-manager] 2025-11-01 12:24:27.319698 | orchestrator | 2025-11-01 12:24:27.319711 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-11-01 12:24:28.207193 | orchestrator | changed: [testbed-manager] 2025-11-01 12:24:28.207303 | orchestrator | 2025-11-01 12:24:28.207320 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-11-01 12:25:35.002652 | orchestrator | changed: [testbed-manager] 2025-11-01 12:25:35.002768 | orchestrator | 2025-11-01 
12:25:35.002785 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-11-01 12:25:36.122340 | orchestrator | ok: [testbed-manager] 2025-11-01 12:25:36.122438 | orchestrator | 2025-11-01 12:25:36.122481 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-11-01 12:25:36.235904 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:25:36.235957 | orchestrator | 2025-11-01 12:25:36.235972 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-11-01 12:25:39.045376 | orchestrator | changed: [testbed-manager] 2025-11-01 12:25:39.045501 | orchestrator | 2025-11-01 12:25:39.045516 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-11-01 12:25:39.122695 | orchestrator | ok: [testbed-manager] 2025-11-01 12:25:39.122755 | orchestrator | 2025-11-01 12:25:39.122764 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-01 12:25:39.122771 | orchestrator | 2025-11-01 12:25:39.122777 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-11-01 12:25:39.184116 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:25:39.184209 | orchestrator | 2025-11-01 12:25:39.184227 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-11-01 12:26:39.237298 | orchestrator | Pausing for 60 seconds 2025-11-01 12:26:39.237403 | orchestrator | changed: [testbed-manager] 2025-11-01 12:26:39.237417 | orchestrator | 2025-11-01 12:26:39.237429 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-11-01 12:26:42.435035 | orchestrator | changed: [testbed-manager] 2025-11-01 12:26:42.435134 | orchestrator | 2025-11-01 12:26:42.435151 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-11-01 12:27:44.825140 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-11-01 12:27:44.825931 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-11-01 12:27:44.825953 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2025-11-01 12:27:44.825979 | orchestrator | changed: [testbed-manager] 2025-11-01 12:27:44.825989 | orchestrator | 2025-11-01 12:27:44.825998 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-11-01 12:27:57.075327 | orchestrator | changed: [testbed-manager] 2025-11-01 12:27:57.075499 | orchestrator | 2025-11-01 12:27:57.075519 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-11-01 12:27:57.171539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-11-01 12:27:57.171611 | orchestrator | 2025-11-01 12:27:57.171626 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-11-01 12:27:57.171639 | orchestrator | 2025-11-01 12:27:57.171650 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-11-01 12:27:57.226387 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:27:57.226421 | orchestrator | 2025-11-01 12:27:57.226480 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-11-01 12:27:57.314392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-11-01 12:27:57.314417 | orchestrator | 2025-11-01 12:27:57.314429 | orchestrator | TASK [osism.services.manager : Deploy service 
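The "Wait for an healthy manager service" handler polls the container's healthcheck, emitting `FAILED - RETRYING ... (N retries left)` until it succeeds, starting from 50 retries. A sketch of that retry loop using `docker inspect` (the container name, delay, and function name are assumptions; the log does not show the exact command the role runs):

```shell
#!/usr/bin/env bash
set -e

# Poll a container's health status until it reports "healthy" or the
# retry budget is spent, echoing a countdown like the handler in the log.
wait_for_healthy() {
    local container="$1" retries="${2:-50}" delay="${3:-5}"
    local i status
    for ((i = 1; i <= retries; i++)); do
        status=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null || echo unknown)
        if [ "$status" = "healthy" ]; then
            echo "healthy after $i checks"
            return 0
        fi
        echo "FAILED - RETRYING ($((retries - i)) retries left)" >&2
        sleep "$delay"
    done
    return 1
}

# Usage: wait_for_healthy osism-manager 50 5
```

`.State.Health.Status` only exists for containers whose image or compose file defines a `HEALTHCHECK`; the `|| echo unknown` keeps the loop retrying while the container is still being created.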
manager version check script] **** 2025-11-01 12:27:58.166724 | orchestrator | changed: [testbed-manager] 2025-11-01 12:27:58.166804 | orchestrator | 2025-11-01 12:27:58.166816 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-11-01 12:28:01.850975 | orchestrator | ok: [testbed-manager] 2025-11-01 12:28:01.851068 | orchestrator | 2025-11-01 12:28:01.851105 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-11-01 12:28:01.927536 | orchestrator | ok: [testbed-manager] => { 2025-11-01 12:28:01.927587 | orchestrator | "version_check_result.stdout_lines": [ 2025-11-01 12:28:01.927602 | orchestrator | "=== OSISM Container Version Check ===", 2025-11-01 12:28:01.927614 | orchestrator | "Checking running containers against expected versions...", 2025-11-01 12:28:01.927627 | orchestrator | "", 2025-11-01 12:28:01.927639 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-11-01 12:28:01.927650 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-01 12:28:01.927661 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.927672 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-11-01 12:28:01.927684 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.927695 | orchestrator | "", 2025-11-01 12:28:01.927706 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-11-01 12:28:01.927717 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-11-01 12:28:01.927728 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.927739 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-11-01 12:28:01.927749 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.927760 | orchestrator | "", 2025-11-01 12:28:01.927771 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2025-11-01 12:28:01.927782 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-01 12:28:01.927793 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.927804 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-11-01 12:28:01.927815 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.927826 | orchestrator | "", 2025-11-01 12:28:01.927836 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-11-01 12:28:01.927847 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-01 12:28:01.927859 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.927870 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-11-01 12:28:01.927881 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.927892 | orchestrator | "", 2025-11-01 12:28:01.927903 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-11-01 12:28:01.927937 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-01 12:28:01.927949 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.927960 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-11-01 12:28:01.927970 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.927981 | orchestrator | "", 2025-11-01 12:28:01.927992 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-11-01 12:28:01.928003 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-11-01 12:28:01.928014 | orchestrator | " Enabled: true", 2025-11-01 12:28:01.928025 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-11-01 12:28:01.928036 | orchestrator | " Status: ✅ MATCH", 2025-11-01 12:28:01.928047 | orchestrator | "", 2025-11-01 12:28:01.928058 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-11-01 12:28:01.928069 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3",
2025-11-01 12:28:01.928083 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928096 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-11-01 12:28:01.928108 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928122 | orchestrator | "",
2025-11-01 12:28:01.928142 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-11-01 12:28:01.928155 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-01 12:28:01.928167 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928180 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-11-01 12:28:01.928193 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928205 | orchestrator | "",
2025-11-01 12:28:01.928218 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-11-01 12:28:01.928231 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-11-01 12:28:01.928248 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928261 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-11-01 12:28:01.928273 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928286 | orchestrator | "",
2025-11-01 12:28:01.928298 | orchestrator | "Checking service: redis (Redis Cache)",
2025-11-01 12:28:01.928311 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-01 12:28:01.928323 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928336 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-11-01 12:28:01.928349 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928361 | orchestrator | "",
2025-11-01 12:28:01.928374 | orchestrator | "Checking service: api (OSISM API Service)",
2025-11-01 12:28:01.928386 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928399 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928412 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928424 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928456 | orchestrator | "",
2025-11-01 12:28:01.928468 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-11-01 12:28:01.928479 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928490 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928501 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928512 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928523 | orchestrator | "",
2025-11-01 12:28:01.928533 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-11-01 12:28:01.928544 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928555 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928566 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928577 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928588 | orchestrator | "",
2025-11-01 12:28:01.928598 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-11-01 12:28:01.928609 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928620 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928638 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928650 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928660 | orchestrator | "",
2025-11-01 12:28:01.928671 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-11-01 12:28:01.928696 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928708 | orchestrator | " Enabled: true",
2025-11-01 12:28:01.928718 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-11-01 12:28:01.928729 | orchestrator | " Status: ✅ MATCH",
2025-11-01 12:28:01.928740 | orchestrator | "",
2025-11-01 12:28:01.928751 | orchestrator | "=== Summary ===",
2025-11-01 12:28:01.928762 | orchestrator | "Errors (version mismatches): 0",
2025-11-01 12:28:01.928773 | orchestrator | "Warnings (expected containers not running): 0",
2025-11-01 12:28:01.928784 | orchestrator | "",
2025-11-01 12:28:01.928795 | orchestrator | "✅ All running containers match expected versions!"
2025-11-01 12:28:01.928806 | orchestrator | ]
2025-11-01 12:28:01.928817 | orchestrator | }
2025-11-01 12:28:01.928829 | orchestrator |
2025-11-01 12:28:01.928840 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2025-11-01 12:28:01.995141 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:28:01.995191 | orchestrator |
2025-11-01 12:28:01.995205 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:28:01.995217 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-11-01 12:28:01.995229 | orchestrator |
2025-11-01 12:28:02.139142 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-11-01 12:28:02.139921 | orchestrator | + deactivate
2025-11-01 12:28:02.139948 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-11-01 12:28:02.139963 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-11-01 12:28:02.139976 | orchestrator | + export PATH
2025-11-01 12:28:02.139988 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-11-01 12:28:02.140001 | orchestrator | + '[' -n '' ']'
2025-11-01 12:28:02.140014 | orchestrator | + hash -r
2025-11-01 12:28:02.140027 | orchestrator | + '[' -n '' ']'
2025-11-01 12:28:02.140040 | orchestrator | + unset VIRTUAL_ENV
2025-11-01 12:28:02.140052 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-11-01 12:28:02.140064 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-11-01 12:28:02.140077 | orchestrator | + unset -f deactivate
2025-11-01 12:28:02.140089 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-11-01 12:28:02.147369 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-11-01 12:28:02.147396 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-11-01 12:28:02.147407 | orchestrator | + local max_attempts=60
2025-11-01 12:28:02.147419 | orchestrator | + local name=ceph-ansible
2025-11-01 12:28:02.147459 | orchestrator | + local attempt_num=1
2025-11-01 12:28:02.148560 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-11-01 12:28:02.188572 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-01 12:28:02.188602 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-11-01 12:28:02.188613 | orchestrator | + local max_attempts=60
2025-11-01 12:28:02.188624 | orchestrator | + local name=kolla-ansible
2025-11-01 12:28:02.188635 | orchestrator | + local attempt_num=1
2025-11-01 12:28:02.189604 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-11-01 12:28:02.222876 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-01 12:28:02.222931 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-11-01 12:28:02.222946 | orchestrator | + local max_attempts=60
2025-11-01 12:28:02.222958 | orchestrator | + local name=osism-ansible
2025-11-01 12:28:02.222969 | orchestrator | + local attempt_num=1
2025-11-01 12:28:02.222980 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-11-01 12:28:02.255762 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-11-01 12:28:02.255813 | orchestrator | + [[ true == \t\r\u\e ]]
2025-11-01 12:28:02.255826 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-11-01 12:28:02.958960 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-11-01 12:28:03.127830 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-11-01 12:28:03.127927 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 12:28:03.127941 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 12:28:03.127953 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-11-01 12:28:03.127966 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2025-11-01 12:28:03.127977 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-11-01 12:28:03.128006 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-11-01 12:28:03.128018 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-11-01 12:28:03.128028 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-11-01 12:28:03.128039 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-11-01 12:28:03.128050 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-11-01 12:28:03.128061 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-11-01 12:28:03.128071 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-11-01 12:28:03.128082 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2025-11-01 12:28:03.128093 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-11-01 12:28:03.128104 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-11-01 12:28:03.136427 | orchestrator | ++ semver latest 7.0.0
2025-11-01 12:28:03.191374 | orchestrator | + [[ -1 -ge 0 ]]
2025-11-01 12:28:03.191399 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-01 12:28:03.191412 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-11-01 12:28:03.196230 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-11-01 12:28:15.646005 | orchestrator | 2025-11-01 12:28:15 | INFO  | Task 3074f3a1-2437-4e66-a6ab-ec2f1a320f64 (resolvconf) was prepared for execution.
2025-11-01 12:28:15.646170 | orchestrator | 2025-11-01 12:28:15 | INFO  | It takes a moment until task 3074f3a1-2437-4e66-a6ab-ec2f1a320f64 (resolvconf) has been started and output is visible here.
2025-11-01 12:28:32.180957 | orchestrator |
2025-11-01 12:28:32.181053 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-11-01 12:28:32.181068 | orchestrator |
2025-11-01 12:28:32.181078 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-11-01 12:28:32.181088 | orchestrator | Saturday 01 November 2025 12:28:20 +0000 (0:00:00.174) 0:00:00.174 *****
2025-11-01 12:28:32.181098 | orchestrator | ok: [testbed-manager]
2025-11-01 12:28:32.181109 | orchestrator |
2025-11-01 12:28:32.181119 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-11-01 12:28:32.181129 | orchestrator | Saturday 01 November 2025 12:28:25 +0000 (0:00:05.074) 0:00:05.249 *****
2025-11-01 12:28:32.181139 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:28:32.181149 | orchestrator |
2025-11-01 12:28:32.181158 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-11-01 12:28:32.181168 | orchestrator | Saturday 01 November 2025 12:28:25 +0000 (0:00:00.073) 0:00:05.322 *****
2025-11-01 12:28:32.181178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-11-01 12:28:32.181188 | orchestrator |
2025-11-01 12:28:32.181207 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-11-01 12:28:32.181217 | orchestrator | Saturday 01 November 2025 12:28:25 +0000 (0:00:00.092) 0:00:05.414 *****
2025-11-01 12:28:32.181227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-11-01 12:28:32.181237 | orchestrator |
2025-11-01 12:28:32.181247 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-11-01 12:28:32.181256 | orchestrator | Saturday 01 November 2025 12:28:25 +0000 (0:00:00.076) 0:00:05.490 *****
2025-11-01 12:28:32.181266 | orchestrator | ok: [testbed-manager]
2025-11-01 12:28:32.181276 | orchestrator |
2025-11-01 12:28:32.181285 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-11-01 12:28:32.181295 | orchestrator | Saturday 01 November 2025 12:28:27 +0000 (0:00:01.277) 0:00:06.767 *****
2025-11-01 12:28:32.181305 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:28:32.181315 | orchestrator |
2025-11-01 12:28:32.181324 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-11-01 12:28:32.181334 | orchestrator | Saturday 01 November 2025 12:28:27 +0000 (0:00:00.063) 0:00:06.831 *****
2025-11-01 12:28:32.181344 | orchestrator | ok: [testbed-manager]
2025-11-01 12:28:32.181353 | orchestrator |
2025-11-01 12:28:32.181363 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-11-01 12:28:32.181372 | orchestrator | Saturday 01 November 2025 12:28:27 +0000 (0:00:00.543) 0:00:07.374 *****
2025-11-01 12:28:32.181382 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:28:32.181391 | orchestrator |
2025-11-01 12:28:32.181401 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-11-01 12:28:32.181412 | orchestrator | Saturday 01 November 2025 12:28:27 +0000 (0:00:00.089) 0:00:07.464 *****
2025-11-01 12:28:32.181421 | orchestrator | changed: [testbed-manager]
2025-11-01 12:28:32.181478 | orchestrator |
2025-11-01 12:28:32.181489 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-11-01 12:28:32.181499 | orchestrator | Saturday 01 November 2025 12:28:28 +0000 (0:00:00.610) 0:00:08.074 *****
2025-11-01 12:28:32.181510 | orchestrator | changed: [testbed-manager]
2025-11-01 12:28:32.181520 | orchestrator |
2025-11-01 12:28:32.181531 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-11-01 12:28:32.181542 | orchestrator | Saturday 01 November 2025 12:28:29 +0000 (0:00:01.191) 0:00:09.266 *****
2025-11-01 12:28:32.181552 | orchestrator | ok: [testbed-manager]
2025-11-01 12:28:32.181563 | orchestrator |
2025-11-01 12:28:32.181574 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-11-01 12:28:32.181603 | orchestrator | Saturday 01 November 2025 12:28:30 +0000 (0:00:01.073) 0:00:10.340 *****
2025-11-01 12:28:32.181615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-11-01 12:28:32.181626 | orchestrator |
2025-11-01 12:28:32.181637 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-11-01 12:28:32.181647 | orchestrator | Saturday 01 November 2025 12:28:30 +0000 (0:00:00.080) 0:00:10.420 *****
2025-11-01 12:28:32.181658 | orchestrator | changed: [testbed-manager]
2025-11-01 12:28:32.181668 | orchestrator |
2025-11-01 12:28:32.181679 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:28:32.181691 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 12:28:32.181702 | orchestrator |
2025-11-01 12:28:32.181713 | orchestrator |
2025-11-01 12:28:32.181724 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:28:32.181735 | orchestrator | Saturday 01 November 2025 12:28:31 +0000 (0:00:01.237) 0:00:11.657 *****
2025-11-01 12:28:32.181746 | orchestrator | ===============================================================================
2025-11-01 12:28:32.181756 | orchestrator | Gathering Facts --------------------------------------------------------- 5.07s
2025-11-01 12:28:32.181767 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.28s
2025-11-01 12:28:32.181778 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s
2025-11-01 12:28:32.181788 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.19s
2025-11-01 12:28:32.181799 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.07s
2025-11-01 12:28:32.181810 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s
2025-11-01 12:28:32.181837 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s
2025-11-01 12:28:32.181849 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-11-01 12:28:32.181861 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-11-01 12:28:32.181871 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-11-01 12:28:32.181886 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-11-01 12:28:32.181896 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-11-01 12:28:32.181905 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-11-01 12:28:32.557177 | orchestrator | + osism apply sshconfig
2025-11-01 12:28:44.936542 | orchestrator | 2025-11-01 12:28:44 | INFO  | Task a3d931fa-87a8-469f-bb38-0e6277ec1892 (sshconfig) was prepared for execution.
2025-11-01 12:28:44.936655 | orchestrator | 2025-11-01 12:28:44 | INFO  | It takes a moment until task a3d931fa-87a8-469f-bb38-0e6277ec1892 (sshconfig) has been started and output is visible here.
2025-11-01 12:28:57.931491 | orchestrator |
2025-11-01 12:28:57.931605 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-11-01 12:28:57.931623 | orchestrator |
2025-11-01 12:28:57.931635 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-11-01 12:28:57.931647 | orchestrator | Saturday 01 November 2025 12:28:49 +0000 (0:00:00.176) 0:00:00.176 *****
2025-11-01 12:28:57.931658 | orchestrator | ok: [testbed-manager]
2025-11-01 12:28:57.931670 | orchestrator |
2025-11-01 12:28:57.931682 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-11-01 12:28:57.931693 | orchestrator | Saturday 01 November 2025 12:28:50 +0000 (0:00:00.581) 0:00:00.757 *****
2025-11-01 12:28:57.931704 | orchestrator | changed: [testbed-manager]
2025-11-01 12:28:57.931715 | orchestrator |
2025-11-01 12:28:57.931726 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-11-01 12:28:57.931762 | orchestrator | Saturday 01 November 2025 12:28:50 +0000 (0:00:00.580) 0:00:01.338 *****
2025-11-01 12:28:57.931774 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-11-01 12:28:57.931786 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-11-01 12:28:57.931796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-11-01 12:28:57.931807 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-11-01 12:28:57.931818 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-11-01 12:28:57.931829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-11-01 12:28:57.931840 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-11-01 12:28:57.931850 | orchestrator |
2025-11-01 12:28:57.931861 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-11-01 12:28:57.931872 | orchestrator | Saturday 01 November 2025 12:28:56 +0000 (0:00:06.104) 0:00:07.443 *****
2025-11-01 12:28:57.931883 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:28:57.931894 | orchestrator |
2025-11-01 12:28:57.931905 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-11-01 12:28:57.931915 | orchestrator | Saturday 01 November 2025 12:28:56 +0000 (0:00:00.090) 0:00:07.534 *****
2025-11-01 12:28:57.931926 | orchestrator | changed: [testbed-manager]
2025-11-01 12:28:57.931937 | orchestrator |
2025-11-01 12:28:57.931948 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:28:57.931960 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-01 12:28:57.931971 | orchestrator |
2025-11-01 12:28:57.931983 | orchestrator |
2025-11-01 12:28:57.931995 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:28:57.932007 | orchestrator | Saturday 01 November 2025 12:28:57 +0000 (0:00:00.645) 0:00:08.180 *****
2025-11-01 12:28:57.932020 | orchestrator | ===============================================================================
2025-11-01 12:28:57.932032 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.10s
2025-11-01 12:28:57.932045 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.65s
2025-11-01 12:28:57.932057 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s
2025-11-01 12:28:57.932069 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.58s
2025-11-01 12:28:57.932081 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s
2025-11-01 12:28:58.312006 | orchestrator | + osism apply known-hosts
2025-11-01 12:29:10.556526 | orchestrator | 2025-11-01 12:29:10 | INFO  | Task 5ff6bb1a-d6c5-44df-931e-d663075bb6d5 (known-hosts) was prepared for execution.
2025-11-01 12:29:10.556634 | orchestrator | 2025-11-01 12:29:10 | INFO  | It takes a moment until task 5ff6bb1a-d6c5-44df-931e-d663075bb6d5 (known-hosts) has been started and output is visible here.
2025-11-01 12:29:28.742784 | orchestrator |
2025-11-01 12:29:28.742893 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-11-01 12:29:28.742910 | orchestrator |
2025-11-01 12:29:28.742922 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-11-01 12:29:28.742934 | orchestrator | Saturday 01 November 2025 12:29:15 +0000 (0:00:00.194) 0:00:00.194 *****
2025-11-01 12:29:28.742945 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-11-01 12:29:28.742957 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-11-01 12:29:28.742968 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-11-01 12:29:28.742979 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-11-01 12:29:28.742989 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-11-01 12:29:28.743000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-11-01 12:29:28.743033 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-11-01 12:29:28.743044 | orchestrator |
2025-11-01 12:29:28.743066 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-11-01 12:29:28.743078 | orchestrator | Saturday 01 November 2025 12:29:21 +0000 (0:00:06.215) 0:00:06.410 *****
2025-11-01 12:29:28.743090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-11-01 12:29:28.743103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-11-01 12:29:28.743114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-11-01 12:29:28.743125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-11-01 12:29:28.743136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-11-01 12:29:28.743147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-11-01 12:29:28.743158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-11-01 12:29:28.743169 | orchestrator |
2025-11-01 12:29:28.743180 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743191 | orchestrator | Saturday 01 November 2025 12:29:21 +0000 (0:00:00.168) 0:00:06.578 *****
2025-11-01 12:29:28.743202 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDnNVtx7t13Wj1PRlm+6Jf52LIk/gtZxb1znm2o1mCbR)
2025-11-01 12:29:28.743217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8Wi0VEOkOTd4twcLYa7QWy1Ez7Jb6AzearUYyT6h21oE1uCN/f9YdkyOvlcWCPEWV9/JgnLy49Hgb8l3up0iq78amiYyS2PVmclkR8tf7rMCOhbN91k9dVHKFhtJWGrEL0UHJwqKLuOSlhHqjmQXzH0/DviAbiAfOF0zd7AlIVAtpl6mJR914kPhbPWhvFInL8RIPQqUYhcZfp35oM1wYnh12O6dyMHF9y4tcPUp41Bm6mZJ0jy+0rWRkD5ZMa+OPlwLBS2PfCw4o/RRgiyo9CbnDRDm7TsC/kNibSD1drkR+ni3h0nMiaZ85ZNBgyGIpOE1kaRrmFzARKvlScEXY4OuPNNEpjScUtMJGMS4JDkf+4/64iTfRp4A4Q1tSb5z9RKV7GVD2yxU3M2Au5BKObN1Cas6OR8MFElzWGT01nIZ27qpdty32L8My9h2NBSzbM3IYS/Rw+XohdOKZF/naRvdnJEH5z7gZ1j9I0Abj8cA4HYnZ3QBgEwCvxnTYbHs=)
2025-11-01 12:29:28.743231 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEupSN5zc2Sfq4j6SNwmX8LTMSXLabmrDzwU7e33jntBsLrifMjdKYqvd+W76tAWC8BqYtckJakjhgMcA2xBYXc=)
2025-11-01 12:29:28.743244 | orchestrator |
2025-11-01 12:29:28.743255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743266 | orchestrator | Saturday 01 November 2025 12:29:22 +0000 (0:00:01.294) 0:00:07.873 *****
2025-11-01 12:29:28.743277 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDThyuHj+QO2kqxbCXBfDVVmODZwdWDBG9xY614cN/lO6tAb974tyrUiAKcIczSKBH9H4iL98OPk7f4qY/FqA+U=)
2025-11-01 12:29:28.743315 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOv+zn7sIESq5nqr56kv6DffVfyXAFGwWeblXZIfZn2HWRUozA3TgjhgUyIyNkM5x4B51H6JNvo76dntF2OYX+bolTI6MHNcSYy2Pfo7PfAsUbXHclg1l1FkcNn6tzXzBCmi2xSAW+X1VjSJoUVedSXJSsjkRPK8d/gsYC8MdVPqQE0KKY7m7ZgStKguB7EnPa5F1RlHX72MUhO1IjWcnr4xKXhZPUcMk+A8BvVi6Zcd0hXMqdKwFU6jYfwJgY+R/b1JvIR8rM4QKa4NOBMRCzZUXcyOeaTNdgb3YW9BIwi7dO+bQOQF+d2EdZzIJU5GzfKi8EweX+WQ+taEfksdDYTBTR8qYEU0LRPyRvZBtGv4pkQpVemV+RA0LlJhfc1RNN4ntZ4OyjFEzmFaQ1pH7CKaeNC2ymjw7g+AdMrqyPbik5yObrnhulNGQpl8Z0jNzSRND4f9Trfg2QJHpQqkBJxxkhzX1GqZIPmXnTgyD0WKjc912d4iw5DM3QCPiX9I8=)
2025-11-01 12:29:28.743337 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILXDKwICrs0vBl5zMpNkLVwufuHGEJ57LtbJM5yzDX9h)
2025-11-01 12:29:28.743349 | orchestrator |
2025-11-01 12:29:28.743362 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743375 | orchestrator | Saturday 01 November 2025 12:29:24 +0000 (0:00:01.171) 0:00:09.045 *****
2025-11-01 12:29:28.743486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5FFgmbLI7hk2ryZ2EGuhfA+5Q7NDbdTC1Sbldau77h6nB70iPXgn5gOdd20qxzLf4TXMzp67bgayYbjxMRBsf0eaoHRuSiena7pmgZ+sL363LQBu0oH1+16IpKHr/k/5Y62rPekH96wLpMXaMWZx1xkU86aRp9be6FsRFSJVRxg41qoAYwkyaV3K0BO3ik0Q2xMOiOWRf2blxUQzPZx4fPcYbYtdn5cJrsPcsL8127/ERKelm6VL8xR9sdKlU8Ow/bhmgN/YRKYmg2KXeGy5EuwjGk1o+GPlsQu7ZXGzDi1TFbE5JWl1VEkPh1FH88ttLkAcegp6ILEUHfCGklUXvmY0QA7DI5/9dSh+QpEbgUkRNZr+GmpncUke0BDFA6xox18e4YWWKabCFBV6P4aE14B58nle0qSX0QisDAcUtsE+YAW7ibCfjCS2DBBc9rewFraVG25QNF0zmfnT+uosp6eWzCRN6bkU9nzVutULgg3w9no5JE/JuLN/9oQQ2V5U=)
2025-11-01 12:29:28.743501 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6wok8M3l2aZoTcfMcm4ozrwHwFtju79jdTfHFvSxaYQGcHccJzaIm2ay3YoLq7Sx4JGZ1oZ2Fjc/52YdyaXJg=)
2025-11-01 12:29:28.743514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKG4CZ0ahClzNQSjRTStm3kT1BbqM8QjiQ+XjAt6G4tV)
2025-11-01 12:29:28.743527 | orchestrator |
2025-11-01 12:29:28.743540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743552 | orchestrator | Saturday 01 November 2025 12:29:25 +0000 (0:00:01.162) 0:00:10.207 *****
2025-11-01 12:29:28.743565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ8tb4hkCDV8Y96nicwRUDIFm2X4F1YCn99ZGO5Vvx4PWLXUnPPIlYx/ZZvkYiNsKv+e+TVm/Vto/8HjslSwltHUazGjAEXiVpbntR1zSPkpew1pM4rB6eNTnfdR4EUM9c8cwO9nuZQMGu3P/VxTTTqROhmakLkUtjfFpD2zBuPe0j5uy+jl9ptAzT2J5oBZNMSrkbFs8QFQzvf8l+LF11DK+5yBzLdGs+pzqNvKP/SOTxzfXV2zAP+MbLoG0ZoGayptoO+h3R0mFkkjYBx8VWqp9XHqbAMVCxNzST4UM2GXEBKTo/TXAZ6BbHV5VdWf7uKeSX1oYIJcZ2qqxYUjvDwdrWlauX9HdoFHHXXsue3UM7y+EGIAuk+PDdLviC/PgMlPN3MWTQz4t0yBsMyWn21k7eVA3Gj12DfemcWRHvcHkcP+0OurlZIZMC0BONWi25geFDmj2wtBKU/EIpKn/TzOmK8UqNFd0GBc17j3A12b52U9WcbGnItuXH0zmdXgs=)
2025-11-01 12:29:28.743578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHsxGk4GZQUJP+GI2k6bGo4af9ynipbA5SupEOo0xau0IoW5/VhuYcKh5Yf8T7yQxjLGHYg1aR2eUqIpN6YuCs=)
2025-11-01 12:29:28.743591 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMjffX7anwwg4s039GsC00Dh0rPHNIlWEYtwBKXttFuf)
2025-11-01 12:29:28.743603 | orchestrator |
2025-11-01 12:29:28.743616 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743628 | orchestrator | Saturday 01 November 2025 12:29:26 +0000 (0:00:01.125) 0:00:11.332 *****
2025-11-01 12:29:28.743642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTbYuYmuavSG2Dj6HmLb5Ly5D6VD1PY+sYFf0Dr9BZ+t6TaWlx2FjPs5iCF4Zud8L70aplfa1gyPvRUCLKbXwRRl0iJyz2W+VyQjJ+s02XqQHiEBgbzNb+rzuCGu5wc+etYwULNM7zz4HRASClto4ia4OYYSylCpXUOzwkaN6y7sl5f5k/1RPoc2NDLFIh5lSTzueIT241CKy2ENkcmzXGytoLhS7tJOp63a1HqFYnawsnQVvqoQB00W4Hrt9XTRdXYbsahVmAI5s09j8tGksBjGWqrOJ4PLt293KRflEYxbxfqQv9rVb6JffGlmF1vNnB3PEpvNe6nJAZ8pjeeu8OHuCaERvSCvSa7N4eQu+ZewvtjmwPxeczVjv94dK7fn66wN+Tse6RaOXupRuuDkHNuzVEQaXS4zIXn4OjNTjXnZVWQmofbr61ifhzT/rhhBsvxvSQVZf8qMRuHJU0BItnP9V/ozx/1qgVLR0uQx3YCQJoT5D9XQcs5pRSLDyEnxE=)
2025-11-01 12:29:28.743655 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPdMpA3NKYDxMQHSDYNXTlcj8dBwg0IS8sHqQkFiDjM7YMM/Lu/Ggd/eqvx2RI6tIPRvaIf8bLe90zzswibyDGw=)
2025-11-01 12:29:28.743674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOFxO4SGwFml7B5TxnWRM5f5J42C5HGknfyFpHUJM8lA)
2025-11-01 12:29:28.743685 | orchestrator |
2025-11-01 12:29:28.743696 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:28.743707 | orchestrator | Saturday 01 November 2025 12:29:27 +0000 (0:00:01.177) 0:00:12.510 *****
2025-11-01 12:29:28.743727 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg4X1YJI7To7NcO0UGXXRKwH1aIw8NFUXIW5gqK2pkM5YlUNufeD64fcGfoXf5yqmcO8jIkY5PLsoF3IsW0gYDWnnFQfdKPmhzLSaCpioheLiVjjqGEL44jLp61UVeS2nHYO/WhFs8SlmD8HabUb2kpnp+uYmy5qFbQ6VVq9M7Qu/eojytWLjXqyTDXdbrlnu/T4nwsFanY+7XN3FqXPPxJ0TZssLXbgDampfGP3/DJ+gakv6rwJZnidGsGZlA9/FTUaviGJXULhxvIy8SJ9zMImaR6ryUJoaI06eXZJ0oqkhgRYsLm6tcQZ5y5+eK97WSzLb+HEJiWQA7ouTYroeFoYC3ZEra+2GmvrkHj0WhQ21jpoT867V6UzgOaDDe16V9MbHgaL0CYwDsjOUY+z2ALyZJqfI47qD6gqRA8n4xHdsnI1EIEsQRUjWDG3kXmlyjowouiGnbpVCgjDA6+OXSgHRW+RwdgVxEDeK8xb6/xjV70fyUnJr3pV/edxBOdFE=)
2025-11-01 12:29:40.425308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdl7IdRD6RJDw5YMXBsWKUvPISxGXW/uXfCErxeSqK/9IeIhz+PomwqHmtzSlqQ0+16+MfLlCqU9YRw0blYYTQ=)
2025-11-01 12:29:40.425386 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxkODcWNrbE2iYP0NZ4movz7sPwJr9q4t8p46g8UCs8)
2025-11-01 12:29:40.425395 | orchestrator |
2025-11-01 12:29:40.425400 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-11-01 12:29:40.425406 | orchestrator | Saturday 01 November 2025 12:29:28 +0000 (0:00:01.211) 0:00:13.721 *****
2025-11-01 12:29:40.425411 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBomX3IaHUGKcwo9qmzfNLNQqJkP6cfM45Gj8f1T/GG)
2025-11-01 12:29:40.425416 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXzjpWfQsCub5qgOPL8XalxheuRhA49L229FmdVgOAEUyWtBXzXiWPfnfGKXvrLE+eJ4mrHntVU+o8vqlG35i0VOIV/q7E5dk8wUFR2Po3WVg8oMY5Bbhdjd9oYYQAryH0zSgbKXPMwmDwBHs+RVF2n0fmIr+uZ4RImE6TovEfTIvSG7Z0ewoPQOrfZZ4n99zpAXLhdxtyZpNzVp/BdOL/Qwb7fnEPVWtTfH6deOYczrxzge6JuKcQQb1JBzH6XWejUL1XftYPHHKjsn+7ClA+HyLcI8yCUepB6doCAE5FxD1OF5fj7Q7d+nlOwtRupvi+u3Uva2s+HAbiYDq2y0or6X2JGtlnWcFNIxmQG7T1AbGcxQVg9lMmD3cmIHtgZ7BQ9XPpAsCps0maWe1lwByhzpdXKS2wvWLf92RsqRhE7PcZdpUaKSm8v7anhW4rNsV+j0axlHayhMGPsT89RQi7/oMvkj73atZdvyxOiHhIoKBC14qFxOHxA1syRBdJG70=)
2025-11-01 12:29:40.425422 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPWQeYMlosRfwsU+auiWIpKghLT3VJ5h1obUmsLzSP23tFMxl9qLt4ADt8z1TBmmOLfRKo62iiEps5FMDp4Gu1c=)
2025-11-01 12:29:40.425456 | orchestrator |
2025-11-01 12:29:40.425475 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-11-01 12:29:40.425480 | orchestrator | Saturday 01 November 2025 12:29:29 +0000 (0:00:01.154) 0:00:14.876 *****
2025-11-01 12:29:40.425485 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-11-01 12:29:40.425490 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-11-01 12:29:40.425494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-11-01 12:29:40.425498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-11-01 12:29:40.425503 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-11-01 12:29:40.425507 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-11-01 12:29:40.425514 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-11-01 12:29:40.425518 | orchestrator |
2025-11-01 12:29:40.425522 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-11-01 12:29:40.425528 | orchestrator | Saturday 01 November 2025 12:29:35 +0000 (0:00:05.557) 0:00:20.434 *****
2025-11-01 12:29:40.425547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-11-01 12:29:40.425553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-11-01 12:29:40.425557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-11-01 12:29:40.425561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-11-01 12:29:40.425566 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-11-01 12:29:40.425570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-11-01 12:29:40.425574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-11-01 12:29:40.425578 | orchestrator | 2025-11-01 12:29:40.425582 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:40.425586 | orchestrator | Saturday 01 November 2025 12:29:35 +0000 (0:00:00.208) 0:00:20.642 ***** 2025-11-01 12:29:40.425591 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEupSN5zc2Sfq4j6SNwmX8LTMSXLabmrDzwU7e33jntBsLrifMjdKYqvd+W76tAWC8BqYtckJakjhgMcA2xBYXc=) 2025-11-01 12:29:40.425606 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8Wi0VEOkOTd4twcLYa7QWy1Ez7Jb6AzearUYyT6h21oE1uCN/f9YdkyOvlcWCPEWV9/JgnLy49Hgb8l3up0iq78amiYyS2PVmclkR8tf7rMCOhbN91k9dVHKFhtJWGrEL0UHJwqKLuOSlhHqjmQXzH0/DviAbiAfOF0zd7AlIVAtpl6mJR914kPhbPWhvFInL8RIPQqUYhcZfp35oM1wYnh12O6dyMHF9y4tcPUp41Bm6mZJ0jy+0rWRkD5ZMa+OPlwLBS2PfCw4o/RRgiyo9CbnDRDm7TsC/kNibSD1drkR+ni3h0nMiaZ85ZNBgyGIpOE1kaRrmFzARKvlScEXY4OuPNNEpjScUtMJGMS4JDkf+4/64iTfRp4A4Q1tSb5z9RKV7GVD2yxU3M2Au5BKObN1Cas6OR8MFElzWGT01nIZ27qpdty32L8My9h2NBSzbM3IYS/Rw+XohdOKZF/naRvdnJEH5z7gZ1j9I0Abj8cA4HYnZ3QBgEwCvxnTYbHs=) 2025-11-01 12:29:40.425611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDnNVtx7t13Wj1PRlm+6Jf52LIk/gtZxb1znm2o1mCbR) 2025-11-01 
12:29:40.425615 | orchestrator | 2025-11-01 12:29:40.425620 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:40.425624 | orchestrator | Saturday 01 November 2025 12:29:36 +0000 (0:00:01.136) 0:00:21.779 ***** 2025-11-01 12:29:40.425628 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDThyuHj+QO2kqxbCXBfDVVmODZwdWDBG9xY614cN/lO6tAb974tyrUiAKcIczSKBH9H4iL98OPk7f4qY/FqA+U=) 2025-11-01 12:29:40.425633 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOv+zn7sIESq5nqr56kv6DffVfyXAFGwWeblXZIfZn2HWRUozA3TgjhgUyIyNkM5x4B51H6JNvo76dntF2OYX+bolTI6MHNcSYy2Pfo7PfAsUbXHclg1l1FkcNn6tzXzBCmi2xSAW+X1VjSJoUVedSXJSsjkRPK8d/gsYC8MdVPqQE0KKY7m7ZgStKguB7EnPa5F1RlHX72MUhO1IjWcnr4xKXhZPUcMk+A8BvVi6Zcd0hXMqdKwFU6jYfwJgY+R/b1JvIR8rM4QKa4NOBMRCzZUXcyOeaTNdgb3YW9BIwi7dO+bQOQF+d2EdZzIJU5GzfKi8EweX+WQ+taEfksdDYTBTR8qYEU0LRPyRvZBtGv4pkQpVemV+RA0LlJhfc1RNN4ntZ4OyjFEzmFaQ1pH7CKaeNC2ymjw7g+AdMrqyPbik5yObrnhulNGQpl8Z0jNzSRND4f9Trfg2QJHpQqkBJxxkhzX1GqZIPmXnTgyD0WKjc912d4iw5DM3QCPiX9I8=) 2025-11-01 12:29:40.425637 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILXDKwICrs0vBl5zMpNkLVwufuHGEJ57LtbJM5yzDX9h) 2025-11-01 12:29:40.425646 | orchestrator | 2025-11-01 12:29:40.425651 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:40.425655 | orchestrator | Saturday 01 November 2025 12:29:37 +0000 (0:00:01.208) 0:00:22.988 ***** 2025-11-01 12:29:40.425659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP6wok8M3l2aZoTcfMcm4ozrwHwFtju79jdTfHFvSxaYQGcHccJzaIm2ay3YoLq7Sx4JGZ1oZ2Fjc/52YdyaXJg=) 2025-11-01 12:29:40.425663 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKG4CZ0ahClzNQSjRTStm3kT1BbqM8QjiQ+XjAt6G4tV) 2025-11-01 12:29:40.425668 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5FFgmbLI7hk2ryZ2EGuhfA+5Q7NDbdTC1Sbldau77h6nB70iPXgn5gOdd20qxzLf4TXMzp67bgayYbjxMRBsf0eaoHRuSiena7pmgZ+sL363LQBu0oH1+16IpKHr/k/5Y62rPekH96wLpMXaMWZx1xkU86aRp9be6FsRFSJVRxg41qoAYwkyaV3K0BO3ik0Q2xMOiOWRf2blxUQzPZx4fPcYbYtdn5cJrsPcsL8127/ERKelm6VL8xR9sdKlU8Ow/bhmgN/YRKYmg2KXeGy5EuwjGk1o+GPlsQu7ZXGzDi1TFbE5JWl1VEkPh1FH88ttLkAcegp6ILEUHfCGklUXvmY0QA7DI5/9dSh+QpEbgUkRNZr+GmpncUke0BDFA6xox18e4YWWKabCFBV6P4aE14B58nle0qSX0QisDAcUtsE+YAW7ibCfjCS2DBBc9rewFraVG25QNF0zmfnT+uosp6eWzCRN6bkU9nzVutULgg3w9no5JE/JuLN/9oQQ2V5U=) 2025-11-01 12:29:40.425672 | orchestrator | 2025-11-01 12:29:40.425677 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:40.425681 | orchestrator | Saturday 01 November 2025 12:29:39 +0000 (0:00:01.208) 0:00:24.196 ***** 2025-11-01 12:29:40.425688 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQ8tb4hkCDV8Y96nicwRUDIFm2X4F1YCn99ZGO5Vvx4PWLXUnPPIlYx/ZZvkYiNsKv+e+TVm/Vto/8HjslSwltHUazGjAEXiVpbntR1zSPkpew1pM4rB6eNTnfdR4EUM9c8cwO9nuZQMGu3P/VxTTTqROhmakLkUtjfFpD2zBuPe0j5uy+jl9ptAzT2J5oBZNMSrkbFs8QFQzvf8l+LF11DK+5yBzLdGs+pzqNvKP/SOTxzfXV2zAP+MbLoG0ZoGayptoO+h3R0mFkkjYBx8VWqp9XHqbAMVCxNzST4UM2GXEBKTo/TXAZ6BbHV5VdWf7uKeSX1oYIJcZ2qqxYUjvDwdrWlauX9HdoFHHXXsue3UM7y+EGIAuk+PDdLviC/PgMlPN3MWTQz4t0yBsMyWn21k7eVA3Gj12DfemcWRHvcHkcP+0OurlZIZMC0BONWi25geFDmj2wtBKU/EIpKn/TzOmK8UqNFd0GBc17j3A12b52U9WcbGnItuXH0zmdXgs=) 2025-11-01 12:29:40.425693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHsxGk4GZQUJP+GI2k6bGo4af9ynipbA5SupEOo0xau0IoW5/VhuYcKh5Yf8T7yQxjLGHYg1aR2eUqIpN6YuCs=) 2025-11-01 12:29:40.425703 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMjffX7anwwg4s039GsC00Dh0rPHNIlWEYtwBKXttFuf) 2025-11-01 12:29:45.324557 | orchestrator | 2025-11-01 12:29:45.324659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:45.324676 | orchestrator | Saturday 01 November 2025 12:29:40 +0000 (0:00:01.206) 0:00:25.402 ***** 2025-11-01 12:29:45.324690 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPdMpA3NKYDxMQHSDYNXTlcj8dBwg0IS8sHqQkFiDjM7YMM/Lu/Ggd/eqvx2RI6tIPRvaIf8bLe90zzswibyDGw=) 2025-11-01 12:29:45.324704 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOFxO4SGwFml7B5TxnWRM5f5J42C5HGknfyFpHUJM8lA) 2025-11-01 12:29:45.324719 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTbYuYmuavSG2Dj6HmLb5Ly5D6VD1PY+sYFf0Dr9BZ+t6TaWlx2FjPs5iCF4Zud8L70aplfa1gyPvRUCLKbXwRRl0iJyz2W+VyQjJ+s02XqQHiEBgbzNb+rzuCGu5wc+etYwULNM7zz4HRASClto4ia4OYYSylCpXUOzwkaN6y7sl5f5k/1RPoc2NDLFIh5lSTzueIT241CKy2ENkcmzXGytoLhS7tJOp63a1HqFYnawsnQVvqoQB00W4Hrt9XTRdXYbsahVmAI5s09j8tGksBjGWqrOJ4PLt293KRflEYxbxfqQv9rVb6JffGlmF1vNnB3PEpvNe6nJAZ8pjeeu8OHuCaERvSCvSa7N4eQu+ZewvtjmwPxeczVjv94dK7fn66wN+Tse6RaOXupRuuDkHNuzVEQaXS4zIXn4OjNTjXnZVWQmofbr61ifhzT/rhhBsvxvSQVZf8qMRuHJU0BItnP9V/ozx/1qgVLR0uQx3YCQJoT5D9XQcs5pRSLDyEnxE=) 2025-11-01 12:29:45.324733 | orchestrator | 2025-11-01 12:29:45.324745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:45.324781 | orchestrator | Saturday 01 November 2025 12:29:41 +0000 (0:00:01.206) 0:00:26.609 ***** 2025-11-01 12:29:45.324793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCg4X1YJI7To7NcO0UGXXRKwH1aIw8NFUXIW5gqK2pkM5YlUNufeD64fcGfoXf5yqmcO8jIkY5PLsoF3IsW0gYDWnnFQfdKPmhzLSaCpioheLiVjjqGEL44jLp61UVeS2nHYO/WhFs8SlmD8HabUb2kpnp+uYmy5qFbQ6VVq9M7Qu/eojytWLjXqyTDXdbrlnu/T4nwsFanY+7XN3FqXPPxJ0TZssLXbgDampfGP3/DJ+gakv6rwJZnidGsGZlA9/FTUaviGJXULhxvIy8SJ9zMImaR6ryUJoaI06eXZJ0oqkhgRYsLm6tcQZ5y5+eK97WSzLb+HEJiWQA7ouTYroeFoYC3ZEra+2GmvrkHj0WhQ21jpoT867V6UzgOaDDe16V9MbHgaL0CYwDsjOUY+z2ALyZJqfI47qD6gqRA8n4xHdsnI1EIEsQRUjWDG3kXmlyjowouiGnbpVCgjDA6+OXSgHRW+RwdgVxEDeK8xb6/xjV70fyUnJr3pV/edxBOdFE=) 2025-11-01 12:29:45.324806 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdl7IdRD6RJDw5YMXBsWKUvPISxGXW/uXfCErxeSqK/9IeIhz+PomwqHmtzSlqQ0+16+MfLlCqU9YRw0blYYTQ=) 2025-11-01 12:29:45.324817 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIxkODcWNrbE2iYP0NZ4movz7sPwJr9q4t8p46g8UCs8) 2025-11-01 12:29:45.324829 | orchestrator | 2025-11-01 12:29:45.324840 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-11-01 12:29:45.324850 | orchestrator | Saturday 01 November 2025 12:29:42 +0000 (0:00:01.150) 0:00:27.760 ***** 2025-11-01 12:29:45.324861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHBomX3IaHUGKcwo9qmzfNLNQqJkP6cfM45Gj8f1T/GG) 2025-11-01 12:29:45.324873 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXzjpWfQsCub5qgOPL8XalxheuRhA49L229FmdVgOAEUyWtBXzXiWPfnfGKXvrLE+eJ4mrHntVU+o8vqlG35i0VOIV/q7E5dk8wUFR2Po3WVg8oMY5Bbhdjd9oYYQAryH0zSgbKXPMwmDwBHs+RVF2n0fmIr+uZ4RImE6TovEfTIvSG7Z0ewoPQOrfZZ4n99zpAXLhdxtyZpNzVp/BdOL/Qwb7fnEPVWtTfH6deOYczrxzge6JuKcQQb1JBzH6XWejUL1XftYPHHKjsn+7ClA+HyLcI8yCUepB6doCAE5FxD1OF5fj7Q7d+nlOwtRupvi+u3Uva2s+HAbiYDq2y0or6X2JGtlnWcFNIxmQG7T1AbGcxQVg9lMmD3cmIHtgZ7BQ9XPpAsCps0maWe1lwByhzpdXKS2wvWLf92RsqRhE7PcZdpUaKSm8v7anhW4rNsV+j0axlHayhMGPsT89RQi7/oMvkj73atZdvyxOiHhIoKBC14qFxOHxA1syRBdJG70=) 2025-11-01 12:29:45.324885 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPWQeYMlosRfwsU+auiWIpKghLT3VJ5h1obUmsLzSP23tFMxl9qLt4ADt8z1TBmmOLfRKo62iiEps5FMDp4Gu1c=) 2025-11-01 12:29:45.324896 | orchestrator | 2025-11-01 12:29:45.324907 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-11-01 12:29:45.324917 | orchestrator | Saturday 01 November 2025 12:29:43 +0000 (0:00:01.166) 0:00:28.927 ***** 2025-11-01 12:29:45.324929 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-01 12:29:45.324940 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-11-01 12:29:45.324951 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-11-01 12:29:45.324962 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-11-01 12:29:45.324972 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-11-01 12:29:45.324983 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-11-01 12:29:45.324994 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-11-01 12:29:45.325005 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:29:45.325017 | orchestrator | 2025-11-01 12:29:45.325043 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2025-11-01 12:29:45.325057 | orchestrator | Saturday 01 November 2025 12:29:44 +0000 (0:00:00.198) 0:00:29.126 ***** 2025-11-01 12:29:45.325069 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:29:45.325082 | orchestrator | 2025-11-01 12:29:45.325094 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-11-01 12:29:45.325107 | orchestrator | Saturday 01 November 2025 12:29:44 +0000 (0:00:00.063) 0:00:29.189 ***** 2025-11-01 12:29:45.325126 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:29:45.325139 | orchestrator | 2025-11-01 12:29:45.325151 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-11-01 12:29:45.325164 | orchestrator | Saturday 01 November 2025 12:29:44 +0000 (0:00:00.063) 0:00:29.253 ***** 2025-11-01 12:29:45.325176 | orchestrator | changed: [testbed-manager] 2025-11-01 12:29:45.325188 | orchestrator | 2025-11-01 12:29:45.325200 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:29:45.325213 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-11-01 12:29:45.325226 | orchestrator | 2025-11-01 12:29:45.325238 | orchestrator | 2025-11-01 12:29:45.325251 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:29:45.325264 | orchestrator | Saturday 01 November 2025 12:29:45 +0000 (0:00:00.796) 0:00:30.049 ***** 2025-11-01 12:29:45.325276 | orchestrator | =============================================================================== 2025-11-01 12:29:45.325288 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.22s 2025-11-01 12:29:45.325301 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.56s 2025-11-01 12:29:45.325313 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2025-11-01 12:29:45.325326 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-11-01 12:29:45.325338 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-11-01 12:29:45.325350 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-11-01 12:29:45.325363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-11-01 12:29:45.325375 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-11-01 12:29:45.325388 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-11-01 12:29:45.325400 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-11-01 12:29:45.325458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-11-01 12:29:45.325472 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-11-01 12:29:45.325488 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-11-01 12:29:45.325499 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-11-01 12:29:45.325510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-11-01 12:29:45.325521 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-11-01 12:29:45.325531 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.80s 2025-11-01 12:29:45.325542 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s 2025-11-01 12:29:45.325553 | 
orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2025-11-01 12:29:45.325564 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-11-01 12:29:45.701180 | orchestrator | + osism apply squid 2025-11-01 12:29:58.087285 | orchestrator | 2025-11-01 12:29:58 | INFO  | Task 8a0e3396-c87b-4b24-8251-f886bd4616f2 (squid) was prepared for execution. 2025-11-01 12:29:58.087416 | orchestrator | 2025-11-01 12:29:58 | INFO  | It takes a moment until task 8a0e3396-c87b-4b24-8251-f886bd4616f2 (squid) has been started and output is visible here. 2025-11-01 12:31:56.928218 | orchestrator | 2025-11-01 12:31:56.928328 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-11-01 12:31:56.928344 | orchestrator | 2025-11-01 12:31:56.928356 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-11-01 12:31:56.928368 | orchestrator | Saturday 01 November 2025 12:30:02 +0000 (0:00:00.195) 0:00:00.195 ***** 2025-11-01 12:31:56.928403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-11-01 12:31:56.928459 | orchestrator | 2025-11-01 12:31:56.928473 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-11-01 12:31:56.928484 | orchestrator | Saturday 01 November 2025 12:30:02 +0000 (0:00:00.124) 0:00:00.319 ***** 2025-11-01 12:31:56.928495 | orchestrator | ok: [testbed-manager] 2025-11-01 12:31:56.928507 | orchestrator | 2025-11-01 12:31:56.928518 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-11-01 12:31:56.928529 | orchestrator | Saturday 01 November 2025 12:30:04 +0000 (0:00:01.711) 0:00:02.031 ***** 2025-11-01 12:31:56.928540 | orchestrator | changed: [testbed-manager] => 
(item=/opt/squid/configuration) 2025-11-01 12:31:56.928551 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-11-01 12:31:56.928562 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-11-01 12:31:56.928573 | orchestrator | 2025-11-01 12:31:56.928583 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-11-01 12:31:56.928594 | orchestrator | Saturday 01 November 2025 12:30:05 +0000 (0:00:01.309) 0:00:03.341 ***** 2025-11-01 12:31:56.928605 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-11-01 12:31:56.928616 | orchestrator | 2025-11-01 12:31:56.928627 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-11-01 12:31:56.928637 | orchestrator | Saturday 01 November 2025 12:30:07 +0000 (0:00:01.158) 0:00:04.499 ***** 2025-11-01 12:31:56.928648 | orchestrator | ok: [testbed-manager] 2025-11-01 12:31:56.928659 | orchestrator | 2025-11-01 12:31:56.928670 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-11-01 12:31:56.928680 | orchestrator | Saturday 01 November 2025 12:30:07 +0000 (0:00:00.368) 0:00:04.868 ***** 2025-11-01 12:31:56.928691 | orchestrator | changed: [testbed-manager] 2025-11-01 12:31:56.928702 | orchestrator | 2025-11-01 12:31:56.928713 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-11-01 12:31:56.928723 | orchestrator | Saturday 01 November 2025 12:30:08 +0000 (0:00:01.003) 0:00:05.871 ***** 2025-11-01 12:31:56.928734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-11-01 12:31:56.928746 | orchestrator | ok: [testbed-manager] 2025-11-01 12:31:56.928756 | orchestrator | 2025-11-01 12:31:56.928769 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-11-01 12:31:56.928781 | orchestrator | Saturday 01 November 2025 12:30:43 +0000 (0:00:35.110) 0:00:40.981 ***** 2025-11-01 12:31:56.928795 | orchestrator | changed: [testbed-manager] 2025-11-01 12:31:56.928807 | orchestrator | 2025-11-01 12:31:56.928819 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-11-01 12:31:56.928832 | orchestrator | Saturday 01 November 2025 12:30:55 +0000 (0:00:12.194) 0:00:53.176 ***** 2025-11-01 12:31:56.928844 | orchestrator | Pausing for 60 seconds 2025-11-01 12:31:56.928856 | orchestrator | changed: [testbed-manager] 2025-11-01 12:31:56.928869 | orchestrator | 2025-11-01 12:31:56.928881 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-11-01 12:31:56.928893 | orchestrator | Saturday 01 November 2025 12:31:55 +0000 (0:01:00.092) 0:01:53.268 ***** 2025-11-01 12:31:56.928906 | orchestrator | ok: [testbed-manager] 2025-11-01 12:31:56.928918 | orchestrator | 2025-11-01 12:31:56.928930 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-11-01 12:31:56.928942 | orchestrator | Saturday 01 November 2025 12:31:55 +0000 (0:00:00.077) 0:01:53.346 ***** 2025-11-01 12:31:56.928954 | orchestrator | changed: [testbed-manager] 2025-11-01 12:31:56.928966 | orchestrator | 2025-11-01 12:31:56.928978 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:31:56.928991 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:31:56.929011 | orchestrator | 2025-11-01 12:31:56.929023 | orchestrator | 2025-11-01 12:31:56.929035 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-11-01 12:31:56.929048 | orchestrator | Saturday 01 November 2025 12:31:56 +0000 (0:00:00.732) 0:01:54.079 ***** 2025-11-01 12:31:56.929060 | orchestrator | =============================================================================== 2025-11-01 12:31:56.929073 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-11-01 12:31:56.929085 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.11s 2025-11-01 12:31:56.929098 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.19s 2025-11-01 12:31:56.929110 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.71s 2025-11-01 12:31:56.929123 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.31s 2025-11-01 12:31:56.929133 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2025-11-01 12:31:56.929144 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2025-11-01 12:31:56.929155 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.73s 2025-11-01 12:31:56.929165 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-11-01 12:31:56.929176 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s 2025-11-01 12:31:56.929186 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-11-01 12:31:57.290214 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 12:31:57.290309 | orchestrator | ++ semver latest 9.0.0 2025-11-01 12:31:57.358155 | orchestrator | + [[ -1 -lt 0 ]] 2025-11-01 12:31:57.358198 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 12:31:57.359018 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-11-01 12:32:09.699996 | orchestrator | 2025-11-01 12:32:09 | INFO  | Task 691b548a-28ea-42f9-a5fa-56ce707b90ff (operator) was prepared for execution. 2025-11-01 12:32:09.700099 | orchestrator | 2025-11-01 12:32:09 | INFO  | It takes a moment until task 691b548a-28ea-42f9-a5fa-56ce707b90ff (operator) has been started and output is visible here. 2025-11-01 12:32:26.990294 | orchestrator | 2025-11-01 12:32:26.990396 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-11-01 12:32:26.990411 | orchestrator | 2025-11-01 12:32:26.990469 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 12:32:26.990479 | orchestrator | Saturday 01 November 2025 12:32:14 +0000 (0:00:00.184) 0:00:00.184 ***** 2025-11-01 12:32:26.990489 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:32:26.990501 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:32:26.990511 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:32:26.990520 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:32:26.990530 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:32:26.990540 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:32:26.990549 | orchestrator | 2025-11-01 12:32:26.990559 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-11-01 12:32:26.990569 | orchestrator | Saturday 01 November 2025 12:32:18 +0000 (0:00:03.547) 0:00:03.732 ***** 2025-11-01 12:32:26.990579 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:32:26.990589 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:32:26.990598 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:32:26.990608 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:32:26.990617 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:32:26.990627 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:32:26.990640 | orchestrator | 2025-11-01 
12:32:26.990650 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-11-01 12:32:26.990659 | orchestrator |
2025-11-01 12:32:26.990669 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-11-01 12:32:26.990679 | orchestrator | Saturday 01 November 2025 12:32:18 +0000 (0:00:00.859) 0:00:04.592 *****
2025-11-01 12:32:26.990688 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:32:26.990717 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:32:26.990727 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:32:26.990737 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:32:26.990746 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:32:26.990755 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:32:26.990765 | orchestrator |
2025-11-01 12:32:26.990775 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-11-01 12:32:26.990784 | orchestrator | Saturday 01 November 2025 12:32:19 +0000 (0:00:00.182) 0:00:04.774 *****
2025-11-01 12:32:26.990794 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:32:26.990803 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:32:26.990812 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:32:26.990821 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:32:26.990831 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:32:26.990840 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:32:26.990851 | orchestrator |
2025-11-01 12:32:26.990879 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-11-01 12:32:26.990890 | orchestrator | Saturday 01 November 2025 12:32:19 +0000 (0:00:00.208) 0:00:04.982 *****
2025-11-01 12:32:26.990901 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:26.990913 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:26.990924 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:26.990935 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:26.990946 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:26.990957 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:26.990968 | orchestrator |
2025-11-01 12:32:26.990979 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-11-01 12:32:26.990990 | orchestrator | Saturday 01 November 2025 12:32:20 +0000 (0:00:00.738) 0:00:05.721 *****
2025-11-01 12:32:26.991000 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:26.991011 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:26.991022 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:26.991032 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:26.991043 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:26.991053 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:26.991064 | orchestrator |
2025-11-01 12:32:26.991075 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-11-01 12:32:26.991086 | orchestrator | Saturday 01 November 2025 12:32:20 +0000 (0:00:00.827) 0:00:06.549 *****
2025-11-01 12:32:26.991097 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-11-01 12:32:26.991108 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-11-01 12:32:26.991124 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-11-01 12:32:26.991134 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-11-01 12:32:26.991146 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-11-01 12:32:26.991157 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-11-01 12:32:26.991168 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-11-01 12:32:26.991178 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-11-01 12:32:26.991189 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-11-01 12:32:26.991200 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-11-01 12:32:26.991210 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-11-01 12:32:26.991219 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-11-01 12:32:26.991229 | orchestrator |
2025-11-01 12:32:26.991239 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-11-01 12:32:26.991248 | orchestrator | Saturday 01 November 2025 12:32:22 +0000 (0:00:01.209) 0:00:07.759 *****
2025-11-01 12:32:26.991258 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:26.991267 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:26.991277 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:26.991286 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:26.991295 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:26.991305 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:26.991322 | orchestrator |
2025-11-01 12:32:26.991331 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-11-01 12:32:26.991342 | orchestrator | Saturday 01 November 2025 12:32:23 +0000 (0:00:01.304) 0:00:09.063 *****
2025-11-01 12:32:26.991351 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-11-01 12:32:26.991361 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-11-01 12:32:26.991371 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-11-01 12:32:26.991380 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991405 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991432 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991441 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991451 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991461 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-11-01 12:32:26.991470 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991480 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991489 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991499 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991508 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991518 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-11-01 12:32:26.991527 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991537 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991546 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991556 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991566 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991575 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-11-01 12:32:26.991585 | orchestrator |
2025-11-01 12:32:26.991594 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-11-01 12:32:26.991605 | orchestrator | Saturday 01 November 2025 12:32:24 +0000 (0:00:01.256) 0:00:10.319 *****
2025-11-01 12:32:26.991614 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:26.991624 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:26.991633 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:26.991643 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:26.991652 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:26.991661 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:26.991671 | orchestrator |
2025-11-01 12:32:26.991680 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-11-01 12:32:26.991690 | orchestrator | Saturday 01 November 2025 12:32:24 +0000 (0:00:00.190) 0:00:10.510 *****
2025-11-01 12:32:26.991700 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:26.991709 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:26.991718 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:26.991728 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:26.991738 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:26.991747 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:26.991757 | orchestrator |
2025-11-01 12:32:26.991766 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-11-01 12:32:26.991776 | orchestrator | Saturday 01 November 2025 12:32:25 +0000 (0:00:00.210) 0:00:11.071 *****
2025-11-01 12:32:26.991786 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:26.991795 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:26.991811 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:26.991820 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:26.991830 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:26.991839 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:26.991849 | orchestrator |
2025-11-01 12:32:26.991858 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-11-01 12:32:26.991868 | orchestrator | Saturday 01 November 2025 12:32:25 +0000 (0:00:00.210) 0:00:11.282 *****
2025-11-01 12:32:26.991878 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-11-01 12:32:26.991888 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:26.991897 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-01 12:32:26.991907 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:26.991916 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-01 12:32:26.991926 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:26.991935 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-01 12:32:26.991945 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:26.991954 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-11-01 12:32:26.991964 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:26.991974 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-11-01 12:32:26.991983 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:26.991993 | orchestrator |
2025-11-01 12:32:26.992003 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-11-01 12:32:26.992012 | orchestrator | Saturday 01 November 2025 12:32:26 +0000 (0:00:00.841) 0:00:12.124 *****
2025-11-01 12:32:26.992022 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:26.992032 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:26.992041 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:26.992051 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:26.992060 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:26.992070 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:26.992079 | orchestrator |
2025-11-01 12:32:26.992089 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-11-01 12:32:26.992099 | orchestrator | Saturday 01 November 2025 12:32:26 +0000 (0:00:00.166) 0:00:12.291 *****
2025-11-01 12:32:26.992109 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:26.992118 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:26.992128 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:26.992137 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:26.992147 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:26.992156 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:26.992166 | orchestrator |
2025-11-01 12:32:26.992176 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-11-01 12:32:26.992185 | orchestrator | Saturday 01 November 2025 12:32:26 +0000 (0:00:00.180) 0:00:12.472 *****
2025-11-01 12:32:26.992195 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:26.992205 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:26.992214 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:26.992224 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:26.992240 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:28.215627 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:28.215700 | orchestrator |
2025-11-01 12:32:28.215712 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-11-01 12:32:28.215723 | orchestrator | Saturday 01 November 2025 12:32:26 +0000 (0:00:00.181) 0:00:12.653 *****
2025-11-01 12:32:28.215733 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:32:28.215743 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:32:28.215752 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:32:28.215762 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:32:28.215771 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:32:28.215782 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:32:28.215791 | orchestrator |
2025-11-01 12:32:28.215801 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-11-01 12:32:28.215830 | orchestrator | Saturday 01 November 2025 12:32:27 +0000 (0:00:00.686) 0:00:13.339 *****
2025-11-01 12:32:28.215840 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:32:28.215849 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:32:28.215859 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:32:28.215868 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:32:28.215878 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:32:28.215887 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:32:28.215897 | orchestrator |
2025-11-01 12:32:28.215907 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:32:28.215918 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215929 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215938 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215948 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215972 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215982 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 12:32:28.215992 | orchestrator |
2025-11-01 12:32:28.216002 | orchestrator |
2025-11-01 12:32:28.216011 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:32:28.216021 | orchestrator | Saturday 01 November 2025 12:32:27 +0000 (0:00:00.248) 0:00:13.587 *****
2025-11-01 12:32:28.216031 | orchestrator | ===============================================================================
2025-11-01 12:32:28.216041 | orchestrator | Gathering Facts --------------------------------------------------------- 3.55s
2025-11-01 12:32:28.216051 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s
2025-11-01 12:32:28.216060 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-11-01 12:32:28.216070 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-11-01 12:32:28.216080 | orchestrator | Do not require tty for all users ---------------------------------------- 0.86s
2025-11-01 12:32:28.216090 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.84s
2025-11-01 12:32:28.216103 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2025-11-01 12:32:28.216113 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.74s
2025-11-01 12:32:28.216123 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2025-11-01 12:32:28.216133 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-11-01 12:32:28.216142 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2025-11-01 12:32:28.216152 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s
2025-11-01 12:32:28.216161 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s
2025-11-01 12:32:28.216171 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s
2025-11-01 12:32:28.216181 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-11-01 12:32:28.216190 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-11-01 12:32:28.216200 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-11-01 12:32:28.216216 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-11-01 12:32:28.593144 | orchestrator | + osism apply --environment custom facts
2025-11-01 12:32:30.741342 | orchestrator | 2025-11-01 12:32:30 | INFO  | Trying to run play facts in environment custom
2025-11-01 12:32:40.827320 | orchestrator | 2025-11-01 12:32:40 | INFO  | Task 57c24c0d-311c-44b9-9cf5-f31708cb820c (facts) was prepared for execution.
2025-11-01 12:32:40.827469 | orchestrator | 2025-11-01 12:32:40 | INFO  | It takes a moment until task 57c24c0d-311c-44b9-9cf5-f31708cb820c (facts) has been started and output is visible here.
2025-11-01 12:33:30.794928 | orchestrator |
2025-11-01 12:33:30.795046 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-11-01 12:33:30.795071 | orchestrator |
2025-11-01 12:33:30.795091 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-01 12:33:30.795109 | orchestrator | Saturday 01 November 2025 12:32:45 +0000 (0:00:00.097) 0:00:00.097 *****
2025-11-01 12:33:30.795127 | orchestrator | ok: [testbed-manager]
2025-11-01 12:33:30.795147 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:33:30.795166 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.795184 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.795202 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:33:30.795221 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:33:30.795237 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.795255 | orchestrator |
2025-11-01 12:33:30.795273 | orchestrator | TASK [Copy fact file] **********************************************************
2025-11-01 12:33:30.795292 | orchestrator | Saturday 01 November 2025 12:32:47 +0000 (0:00:01.505) 0:00:01.603 *****
2025-11-01 12:33:30.795312 | orchestrator | ok: [testbed-manager]
2025-11-01 12:33:30.795331 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.795351 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.795371 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:33:30.795391 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:33:30.795464 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.795489 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:33:30.795510 | orchestrator |
2025-11-01 12:33:30.795533 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-11-01 12:33:30.795554 | orchestrator |
2025-11-01 12:33:30.795573 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-11-01 12:33:30.795593 | orchestrator | Saturday 01 November 2025 12:32:48 +0000 (0:00:01.217) 0:00:02.821 *****
2025-11-01 12:33:30.795611 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.795631 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.795655 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.795674 | orchestrator |
2025-11-01 12:33:30.795691 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-11-01 12:33:30.795710 | orchestrator | Saturday 01 November 2025 12:32:48 +0000 (0:00:00.124) 0:00:02.946 *****
2025-11-01 12:33:30.795727 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.795745 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.795763 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.795782 | orchestrator |
2025-11-01 12:33:30.795800 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-11-01 12:33:30.795820 | orchestrator | Saturday 01 November 2025 12:32:48 +0000 (0:00:00.238) 0:00:03.153 *****
2025-11-01 12:33:30.795838 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.795855 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.795873 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.795890 | orchestrator |
2025-11-01 12:33:30.795906 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-11-01 12:33:30.795923 | orchestrator | Saturday 01 November 2025 12:32:48 +0000 (0:00:00.150) 0:00:03.391 *****
2025-11-01 12:33:30.795941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 12:33:30.795993 | orchestrator |
2025-11-01 12:33:30.796011 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-11-01 12:33:30.796028 | orchestrator | Saturday 01 November 2025 12:32:49 +0000 (0:00:00.150) 0:00:03.542 *****
2025-11-01 12:33:30.796045 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.796063 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.796079 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.796096 | orchestrator |
2025-11-01 12:33:30.796114 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-11-01 12:33:30.796132 | orchestrator | Saturday 01 November 2025 12:32:49 +0000 (0:00:00.427) 0:00:03.970 *****
2025-11-01 12:33:30.796150 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:33:30.796168 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:33:30.796186 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:33:30.796204 | orchestrator |
2025-11-01 12:33:30.796237 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-11-01 12:33:30.796256 | orchestrator | Saturday 01 November 2025 12:32:49 +0000 (0:00:00.145) 0:00:04.116 *****
2025-11-01 12:33:30.796274 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.796293 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.796311 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.796329 | orchestrator |
2025-11-01 12:33:30.796346 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-11-01 12:33:30.796362 | orchestrator | Saturday 01 November 2025 12:32:50 +0000 (0:00:01.058) 0:00:05.174 *****
2025-11-01 12:33:30.796379 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.796396 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.796441 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.796459 | orchestrator |
2025-11-01 12:33:30.796476 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-11-01 12:33:30.796493 | orchestrator | Saturday 01 November 2025 12:32:51 +0000 (0:00:00.475) 0:00:05.650 *****
2025-11-01 12:33:30.796511 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.796529 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.796546 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.796563 | orchestrator |
2025-11-01 12:33:30.796580 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-11-01 12:33:30.796598 | orchestrator | Saturday 01 November 2025 12:32:52 +0000 (0:00:01.035) 0:00:06.686 *****
2025-11-01 12:33:30.796616 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.796636 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.796654 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.796670 | orchestrator |
2025-11-01 12:33:30.796687 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-11-01 12:33:30.796703 | orchestrator | Saturday 01 November 2025 12:33:12 +0000 (0:00:20.283) 0:00:26.970 *****
2025-11-01 12:33:30.796721 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:33:30.796739 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:33:30.796756 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:33:30.796773 | orchestrator |
2025-11-01 12:33:30.796786 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-11-01 12:33:30.796823 | orchestrator | Saturday 01 November 2025 12:33:12 +0000 (0:00:00.145) 0:00:27.115 *****
2025-11-01 12:33:30.796835 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:33:30.796846 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:33:30.796856 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:33:30.796867 | orchestrator |
2025-11-01 12:33:30.796878 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-11-01 12:33:30.796889 | orchestrator | Saturday 01 November 2025 12:33:21 +0000 (0:00:08.742) 0:00:35.858 *****
2025-11-01 12:33:30.796899 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.796910 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.796921 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.796932 | orchestrator |
2025-11-01 12:33:30.796957 | orchestrator | TASK [Copy fact files] *********************************************************
2025-11-01 12:33:30.796968 | orchestrator | Saturday 01 November 2025 12:33:21 +0000 (0:00:00.468) 0:00:36.327 *****
2025-11-01 12:33:30.796979 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-11-01 12:33:30.796990 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-11-01 12:33:30.797000 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-11-01 12:33:30.797011 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-11-01 12:33:30.797022 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-11-01 12:33:30.797033 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-11-01 12:33:30.797043 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-11-01 12:33:30.797054 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-11-01 12:33:30.797065 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-11-01 12:33:30.797075 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-11-01 12:33:30.797086 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-11-01 12:33:30.797097 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-11-01 12:33:30.797108 | orchestrator |
2025-11-01 12:33:30.797118 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-11-01 12:33:30.797129 | orchestrator | Saturday 01 November 2025 12:33:25 +0000 (0:00:03.494) 0:00:39.821 *****
2025-11-01 12:33:30.797139 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.797150 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.797161 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.797171 | orchestrator |
2025-11-01 12:33:30.797182 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-01 12:33:30.797193 | orchestrator |
2025-11-01 12:33:30.797204 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-01 12:33:30.797215 | orchestrator | Saturday 01 November 2025 12:33:26 +0000 (0:00:01.475) 0:00:41.297 *****
2025-11-01 12:33:30.797225 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:33:30.797236 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:33:30.797247 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:33:30.797257 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:33:30.797268 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:33:30.797278 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:33:30.797289 | orchestrator | ok: [testbed-manager]
2025-11-01 12:33:30.797299 | orchestrator |
2025-11-01 12:33:30.797310 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:33:30.797322 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:33:30.797334 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:33:30.797347 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:33:30.797394 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:33:30.797445 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:33:30.797460 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:33:30.797471 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:33:30.797490 | orchestrator |
2025-11-01 12:33:30.797500 | orchestrator |
2025-11-01 12:33:30.797511 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:33:30.797522 | orchestrator | Saturday 01 November 2025 12:33:30 +0000 (0:00:03.947) 0:00:45.244 *****
2025-11-01 12:33:30.797533 | orchestrator | ===============================================================================
2025-11-01 12:33:30.797543 | orchestrator | osism.commons.repository : Update package cache ------------------------ 20.28s
2025-11-01 12:33:30.797553 | orchestrator | Install required packages (Debian) -------------------------------------- 8.74s
2025-11-01 12:33:30.797564 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.95s
2025-11-01 12:33:30.797575 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2025-11-01 12:33:30.797585 | orchestrator | Create custom facts directory ------------------------------------------- 1.51s
2025-11-01 12:33:30.797596 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.48s
2025-11-01 12:33:30.797615 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-11-01 12:33:31.120124 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2025-11-01 12:33:31.120206 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-11-01 12:33:31.120218 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-11-01 12:33:31.120228 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2025-11-01 12:33:31.120237 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-11-01 12:33:31.120247 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2025-11-01 12:33:31.120257 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-11-01 12:33:31.120267 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-11-01 12:33:31.120277 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2025-11-01 12:33:31.120287 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.15s
2025-11-01 12:33:31.120297 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-11-01 12:33:31.487784 | orchestrator | + osism apply bootstrap
2025-11-01 12:33:43.827039 | orchestrator | 2025-11-01 12:33:43 | INFO  | Task 1418d5b7-9918-4d5d-a692-4a8816871bd5 (bootstrap) was prepared for execution.
2025-11-01 12:33:43.827148 | orchestrator | 2025-11-01 12:33:43 | INFO  | It takes a moment until task 1418d5b7-9918-4d5d-a692-4a8816871bd5 (bootstrap) has been started and output is visible here.
2025-11-01 12:34:01.517822 | orchestrator |
2025-11-01 12:34:01.517922 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-11-01 12:34:01.517939 | orchestrator |
2025-11-01 12:34:01.517951 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-11-01 12:34:01.517963 | orchestrator | Saturday 01 November 2025 12:33:48 +0000 (0:00:00.163) 0:00:00.163 *****
2025-11-01 12:34:01.517974 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:34:01.517986 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:34:01.517997 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:34:01.518008 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:34:01.518066 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:34:01.518078 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:34:01.518089 | orchestrator | ok: [testbed-manager]
2025-11-01 12:34:01.518100 | orchestrator |
2025-11-01 12:34:01.518111 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-01 12:34:01.518122 | orchestrator |
2025-11-01 12:34:01.518133 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-01 12:34:01.518144 | orchestrator | Saturday 01 November 2025 12:33:49 +0000 (0:00:00.320) 0:00:00.484 *****
2025-11-01 12:34:01.518155 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:34:01.518165 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:34:01.518196 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:34:01.518207 | orchestrator | ok: [testbed-manager]
2025-11-01 12:34:01.518218 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:34:01.518229 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:34:01.518240 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:34:01.518250 | orchestrator |
2025-11-01 12:34:01.518262 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-11-01 12:34:01.518273 | orchestrator |
2025-11-01 12:34:01.518283 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-01 12:34:01.518294 | orchestrator | Saturday 01 November 2025 12:33:53 +0000 (0:00:03.863) 0:00:04.347 *****
2025-11-01 12:34:01.518305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-11-01 12:34:01.518317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-11-01 12:34:01.518328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-11-01 12:34:01.518351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-11-01 12:34:01.518362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-11-01 12:34:01.518373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-11-01 12:34:01.518384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-11-01 12:34:01.518421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-11-01 12:34:01.518434 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-11-01 12:34:01.518445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-11-01 12:34:01.518456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-11-01 12:34:01.518467 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-11-01 12:34:01.518478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-11-01 12:34:01.518489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-01 12:34:01.518499 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-11-01 12:34:01.518510 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-11-01 12:34:01.518520 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:34:01.518531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-11-01 12:34:01.518541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-01 12:34:01.518552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-11-01 12:34:01.518562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-11-01 12:34:01.518573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-11-01 12:34:01.518583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-11-01 12:34:01.518594 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:34:01.518605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-01 12:34:01.518615 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-11-01 12:34:01.518626 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-11-01 12:34:01.518636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-11-01 12:34:01.518647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-01 12:34:01.518658 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-11-01 12:34:01.518668 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-11-01 12:34:01.518679 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-11-01 12:34:01.518689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-01 12:34:01.518700 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-11-01 12:34:01.518710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-01 12:34:01.518721 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-11-01 12:34:01.518731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-11-01 12:34:01.518751 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-11-01 12:34:01.518762 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:34:01.518773 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-11-01 12:34:01.518783 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-11-01 12:34:01.518794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-11-01 12:34:01.518805 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:34:01.518816 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-11-01 12:34:01.518826 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-11-01 12:34:01.518837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-11-01 12:34:01.518864 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-11-01 12:34:01.518876 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-11-01 12:34:01.518886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-11-01 12:34:01.518897 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-11-01 12:34:01.518908 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-11-01 12:34:01.518919 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-11-01 12:34:01.518930 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:34:01.518940 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:34:01.518951 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-11-01 12:34:01.518962 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:34:01.518973 | orchestrator |
2025-11-01 12:34:01.518984 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-11-01 12:34:01.518995 | orchestrator |
2025-11-01 12:34:01.519006 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-11-01 12:34:01.519017 | orchestrator | Saturday 01 November 2025 12:33:53 +0000
(0:00:00.508) 0:00:04.855 ***** 2025-11-01 12:34:01.519028 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:01.519038 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:01.519049 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:01.519060 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:01.519071 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:01.519081 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:01.519092 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:01.519103 | orchestrator | 2025-11-01 12:34:01.519114 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-11-01 12:34:01.519124 | orchestrator | Saturday 01 November 2025 12:33:54 +0000 (0:00:01.322) 0:00:06.178 ***** 2025-11-01 12:34:01.519135 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:01.519146 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:01.519157 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:01.519167 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:01.519178 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:01.519189 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:01.519199 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:01.519210 | orchestrator | 2025-11-01 12:34:01.519221 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-11-01 12:34:01.519232 | orchestrator | Saturday 01 November 2025 12:33:56 +0000 (0:00:01.389) 0:00:07.568 ***** 2025-11-01 12:34:01.519243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:34:01.519256 | orchestrator | 2025-11-01 12:34:01.519267 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-11-01 12:34:01.519278 | orchestrator | 
Saturday 01 November 2025 12:33:56 +0000 (0:00:00.313) 0:00:07.881 ***** 2025-11-01 12:34:01.519289 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:01.519300 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:01.519310 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:01.519328 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:34:01.519339 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:01.519349 | orchestrator | changed: [testbed-manager] 2025-11-01 12:34:01.519360 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:01.519371 | orchestrator | 2025-11-01 12:34:01.519381 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-11-01 12:34:01.519392 | orchestrator | Saturday 01 November 2025 12:33:58 +0000 (0:00:02.140) 0:00:10.023 ***** 2025-11-01 12:34:01.519429 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:01.519452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:34:01.519472 | orchestrator | 2025-11-01 12:34:01.519484 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-11-01 12:34:01.519494 | orchestrator | Saturday 01 November 2025 12:33:59 +0000 (0:00:00.323) 0:00:10.346 ***** 2025-11-01 12:34:01.519505 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:01.519516 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:01.519526 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:01.519537 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:01.519547 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:01.519558 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:34:01.519568 | orchestrator | 2025-11-01 12:34:01.519579 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-11-01 12:34:01.519590 | orchestrator | Saturday 01 November 2025 12:34:00 +0000 (0:00:01.084) 0:00:11.431 ***** 2025-11-01 12:34:01.519601 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:01.519612 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:01.519622 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:01.519633 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:01.519643 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:34:01.519654 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:01.519665 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:01.519675 | orchestrator | 2025-11-01 12:34:01.519686 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-11-01 12:34:01.519696 | orchestrator | Saturday 01 November 2025 12:34:00 +0000 (0:00:00.610) 0:00:12.041 ***** 2025-11-01 12:34:01.519707 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:34:01.519718 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:34:01.519728 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:34:01.519746 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:34:01.519758 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:34:01.519768 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:34:01.519779 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:01.519790 | orchestrator | 2025-11-01 12:34:01.519801 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-11-01 12:34:01.519812 | orchestrator | Saturday 01 November 2025 12:34:01 +0000 (0:00:00.621) 0:00:12.662 ***** 2025-11-01 12:34:01.519823 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:34:01.519834 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:34:01.519853 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:34:14.377459 | orchestrator | skipping: 
[testbed-node-3] 2025-11-01 12:34:14.377574 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:34:14.377596 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:34:14.377614 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:14.377632 | orchestrator | 2025-11-01 12:34:14.377650 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-11-01 12:34:14.377670 | orchestrator | Saturday 01 November 2025 12:34:01 +0000 (0:00:00.258) 0:00:12.920 ***** 2025-11-01 12:34:14.377688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:34:14.377750 | orchestrator | 2025-11-01 12:34:14.377763 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-11-01 12:34:14.377775 | orchestrator | Saturday 01 November 2025 12:34:01 +0000 (0:00:00.334) 0:00:13.255 ***** 2025-11-01 12:34:14.377786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:34:14.377798 | orchestrator | 2025-11-01 12:34:14.377809 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-11-01 12:34:14.377820 | orchestrator | Saturday 01 November 2025 12:34:02 +0000 (0:00:00.460) 0:00:13.715 ***** 2025-11-01 12:34:14.377831 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.377842 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.377853 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.377864 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.377874 | orchestrator | ok: [testbed-node-1] 2025-11-01 
12:34:14.377885 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.377904 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.377915 | orchestrator | 2025-11-01 12:34:14.377927 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-11-01 12:34:14.377939 | orchestrator | Saturday 01 November 2025 12:34:03 +0000 (0:00:01.262) 0:00:14.977 ***** 2025-11-01 12:34:14.377951 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:34:14.377964 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:34:14.377976 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:34:14.377988 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:34:14.378000 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:34:14.378012 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:34:14.378077 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:14.378090 | orchestrator | 2025-11-01 12:34:14.378102 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-11-01 12:34:14.378114 | orchestrator | Saturday 01 November 2025 12:34:03 +0000 (0:00:00.230) 0:00:15.208 ***** 2025-11-01 12:34:14.378126 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.378138 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.378150 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.378162 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.378174 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.378186 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.378198 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.378210 | orchestrator | 2025-11-01 12:34:14.378222 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-11-01 12:34:14.378234 | orchestrator | Saturday 01 November 2025 12:34:04 +0000 (0:00:00.586) 0:00:15.795 ***** 2025-11-01 12:34:14.378246 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 12:34:14.378259 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:34:14.378272 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:34:14.378284 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:34:14.378296 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:34:14.378307 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:34:14.378317 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:14.378328 | orchestrator | 2025-11-01 12:34:14.378339 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-11-01 12:34:14.378351 | orchestrator | Saturday 01 November 2025 12:34:04 +0000 (0:00:00.280) 0:00:16.075 ***** 2025-11-01 12:34:14.378362 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:14.378372 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:14.378383 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:14.378419 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:14.378430 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:34:14.378441 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:14.378460 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.378471 | orchestrator | 2025-11-01 12:34:14.378482 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-11-01 12:34:14.378493 | orchestrator | Saturday 01 November 2025 12:34:05 +0000 (0:00:00.618) 0:00:16.693 ***** 2025-11-01 12:34:14.378504 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:14.378515 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:14.378525 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.378536 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:14.378547 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:14.378557 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:14.378568 | orchestrator | changed: 
[testbed-node-5] 2025-11-01 12:34:14.378579 | orchestrator | 2025-11-01 12:34:14.378590 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-11-01 12:34:14.378600 | orchestrator | Saturday 01 November 2025 12:34:06 +0000 (0:00:01.133) 0:00:17.827 ***** 2025-11-01 12:34:14.378611 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.378622 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.378633 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.378643 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.378654 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.378664 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.378675 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.378686 | orchestrator | 2025-11-01 12:34:14.378696 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-11-01 12:34:14.378707 | orchestrator | Saturday 01 November 2025 12:34:07 +0000 (0:00:01.168) 0:00:18.995 ***** 2025-11-01 12:34:14.378737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:34:14.378749 | orchestrator | 2025-11-01 12:34:14.378760 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-11-01 12:34:14.378771 | orchestrator | Saturday 01 November 2025 12:34:08 +0000 (0:00:00.336) 0:00:19.331 ***** 2025-11-01 12:34:14.378782 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:14.378793 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:14.378804 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:34:14.378814 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:34:14.378825 | orchestrator | changed: [testbed-node-0] 2025-11-01 
12:34:14.378835 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:14.378846 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:34:14.378857 | orchestrator | 2025-11-01 12:34:14.378867 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-11-01 12:34:14.378878 | orchestrator | Saturday 01 November 2025 12:34:09 +0000 (0:00:01.369) 0:00:20.701 ***** 2025-11-01 12:34:14.378889 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.378900 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.378910 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.378921 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.378932 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.378942 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.378953 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.378964 | orchestrator | 2025-11-01 12:34:14.378974 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-11-01 12:34:14.378985 | orchestrator | Saturday 01 November 2025 12:34:09 +0000 (0:00:00.269) 0:00:20.970 ***** 2025-11-01 12:34:14.378996 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.379007 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.379017 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.379028 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379038 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.379049 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.379065 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379076 | orchestrator | 2025-11-01 12:34:14.379087 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-11-01 12:34:14.379105 | orchestrator | Saturday 01 November 2025 12:34:09 +0000 (0:00:00.281) 0:00:21.251 ***** 2025-11-01 12:34:14.379116 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.379126 | 
orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.379137 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.379147 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379158 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.379168 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.379179 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379190 | orchestrator | 2025-11-01 12:34:14.379201 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-11-01 12:34:14.379212 | orchestrator | Saturday 01 November 2025 12:34:10 +0000 (0:00:00.255) 0:00:21.507 ***** 2025-11-01 12:34:14.379223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:34:14.379236 | orchestrator | 2025-11-01 12:34:14.379247 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-11-01 12:34:14.379257 | orchestrator | Saturday 01 November 2025 12:34:10 +0000 (0:00:00.354) 0:00:21.861 ***** 2025-11-01 12:34:14.379268 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.379279 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.379290 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.379301 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379311 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.379322 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.379333 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379343 | orchestrator | 2025-11-01 12:34:14.379354 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-11-01 12:34:14.379365 | orchestrator | Saturday 01 November 2025 12:34:11 +0000 (0:00:00.632) 0:00:22.494 ***** 2025-11-01 12:34:14.379376 | orchestrator | 
skipping: [testbed-node-0] 2025-11-01 12:34:14.379387 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:34:14.379423 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:34:14.379434 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:34:14.379445 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:34:14.379456 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:34:14.379466 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:34:14.379477 | orchestrator | 2025-11-01 12:34:14.379488 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-11-01 12:34:14.379499 | orchestrator | Saturday 01 November 2025 12:34:11 +0000 (0:00:00.282) 0:00:22.776 ***** 2025-11-01 12:34:14.379509 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:14.379520 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:34:14.379531 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379541 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:14.379552 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.379563 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.379573 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379584 | orchestrator | 2025-11-01 12:34:14.379595 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-11-01 12:34:14.379606 | orchestrator | Saturday 01 November 2025 12:34:12 +0000 (0:00:01.090) 0:00:23.867 ***** 2025-11-01 12:34:14.379617 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:34:14.379627 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:34:14.379638 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:34:14.379648 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379659 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:34:14.379670 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:34:14.379680 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379691 | orchestrator | 
2025-11-01 12:34:14.379702 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-11-01 12:34:14.379713 | orchestrator | Saturday 01 November 2025 12:34:13 +0000 (0:00:00.743) 0:00:24.610 ***** 2025-11-01 12:34:14.379730 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:34:14.379741 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:34:14.379752 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:34:14.379763 | orchestrator | ok: [testbed-manager] 2025-11-01 12:34:14.379780 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.480959 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:35:01.481080 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481099 | orchestrator | 2025-11-01 12:35:01.481113 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-11-01 12:35:01.481126 | orchestrator | Saturday 01 November 2025 12:34:14 +0000 (0:00:01.069) 0:00:25.680 ***** 2025-11-01 12:35:01.481138 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:35:01.481149 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481160 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.481171 | orchestrator | changed: [testbed-manager] 2025-11-01 12:35:01.481182 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:35:01.481193 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:35:01.481204 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:35:01.481215 | orchestrator | 2025-11-01 12:35:01.481226 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-11-01 12:35:01.481238 | orchestrator | Saturday 01 November 2025 12:34:33 +0000 (0:00:19.538) 0:00:45.218 ***** 2025-11-01 12:35:01.481249 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:35:01.481260 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:35:01.481271 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:35:01.481281 | orchestrator | 
ok: [testbed-node-3] 2025-11-01 12:35:01.481292 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481303 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.481314 | orchestrator | ok: [testbed-manager] 2025-11-01 12:35:01.481325 | orchestrator | 2025-11-01 12:35:01.481336 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-11-01 12:35:01.481347 | orchestrator | Saturday 01 November 2025 12:34:34 +0000 (0:00:00.255) 0:00:45.474 ***** 2025-11-01 12:35:01.481358 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:35:01.481422 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:35:01.481434 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:35:01.481445 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:35:01.481456 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481467 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.481478 | orchestrator | ok: [testbed-manager] 2025-11-01 12:35:01.481488 | orchestrator | 2025-11-01 12:35:01.481500 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-11-01 12:35:01.481511 | orchestrator | Saturday 01 November 2025 12:34:34 +0000 (0:00:00.262) 0:00:45.736 ***** 2025-11-01 12:35:01.481522 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:35:01.481533 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:35:01.481544 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:35:01.481555 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:35:01.481565 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481577 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.481587 | orchestrator | ok: [testbed-manager] 2025-11-01 12:35:01.481598 | orchestrator | 2025-11-01 12:35:01.481609 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-11-01 12:35:01.481620 | orchestrator | Saturday 01 November 2025 12:34:34 +0000 (0:00:00.280) 0:00:46.017 ***** 2025-11-01 
12:35:01.481633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:35:01.481647 | orchestrator | 2025-11-01 12:35:01.481658 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-11-01 12:35:01.481669 | orchestrator | Saturday 01 November 2025 12:34:35 +0000 (0:00:00.331) 0:00:46.348 ***** 2025-11-01 12:35:01.481679 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:35:01.481713 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:35:01.481724 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:35:01.481735 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:35:01.481746 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:35:01.481757 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:35:01.481767 | orchestrator | ok: [testbed-manager] 2025-11-01 12:35:01.481778 | orchestrator | 2025-11-01 12:35:01.481789 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-11-01 12:35:01.481800 | orchestrator | Saturday 01 November 2025 12:34:36 +0000 (0:00:01.839) 0:00:48.188 ***** 2025-11-01 12:35:01.481811 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:35:01.481822 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:35:01.481832 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:35:01.481843 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:35:01.481854 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:35:01.481864 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:35:01.481875 | orchestrator | changed: [testbed-manager] 2025-11-01 12:35:01.481886 | orchestrator | 2025-11-01 12:35:01.481897 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-11-01 12:35:01.481925 | 
Saturday 01 November 2025 12:34:37 +0000 (0:00:01.121) 0:00:49.310 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Saturday 01 November 2025 12:34:38 +0000 (0:00:00.949) 0:00:50.259 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Saturday 01 November 2025 12:34:39 +0000 (0:00:00.354) 0:00:50.613 *****
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-manager]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Saturday 01 November 2025 12:34:40 +0000 (0:00:01.108) 0:00:51.722 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Saturday 01 November 2025 12:34:40 +0000 (0:00:00.313) 0:00:52.035 *****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Saturday 01 November 2025 12:34:41 +0000 (0:00:00.367) 0:00:52.403 *****
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-5]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Saturday 01 November 2025 12:34:42 +0000 (0:00:01.824) 0:00:54.227 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Saturday 01 November 2025 12:34:44 +0000 (0:00:01.348) 0:00:55.576 *****
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Saturday 01 November 2025 12:34:58 +0000 (0:00:13.795) 0:01:09.371 *****
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-1]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Saturday 01 November 2025 12:34:59 +0000 (0:00:01.614) 0:01:10.985 *****
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Saturday 01 November 2025 12:35:00 +0000 (0:00:00.270) 0:01:11.918 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Saturday 01 November 2025 12:35:00 +0000 (0:00:00.270) 0:01:12.188 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Saturday 01 November 2025 12:35:01 +0000 (0:00:00.261) 0:01:12.449 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager

TASK [osism.commons.packages : Install needrestart package] ********************
Saturday 01 November 2025 12:35:01 +0000 (0:00:00.332) 0:01:12.782 *****
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Saturday 01 November 2025 12:35:03 +0000 (0:00:01.827) 0:01:14.609 *****
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Saturday 01 November 2025 12:35:04 +0000 (0:00:00.746) 0:01:15.356 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.packages : Update package cache] ***************************
Saturday 01 November 2025 12:35:04 +0000 (0:00:00.265) 0:01:15.621 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download upgrade packages] **********************
Saturday 01 November 2025 12:35:05 +0000 (0:00:01.356) 0:01:16.978 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-2]

TASK [osism.commons.packages : Upgrade packages] *******************************
Saturday 01 November 2025 12:35:07 +0000 (0:00:02.005) 0:01:18.984 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download required packages] *********************
Saturday 01 November 2025 12:35:10 +0000 (0:00:03.254) 0:01:22.238 *****
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.commons.packages : Install required packages] **********************
Saturday 01 November 2025 12:35:31 +0000 (0:00:20.517) 0:01:42.755 *****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Saturday 01 November 2025 12:36:52 +0000 (0:01:21.487) 0:03:04.243 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-4]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Saturday 01 November 2025 12:36:54 +0000 (0:00:01.849) 0:03:06.092 *****
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Saturday 01 November 2025 12:37:08 +0000 (0:00:13.388) 0:03:19.480 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Saturday 01 November 2025 12:37:08 +0000 (0:00:00.443) 0:03:19.924 *****
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Saturday 01 November 2025 12:37:09 +0000 (0:00:00.689) 0:03:20.613 *****
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Saturday 01 November 2025 12:37:14 +0000 (0:00:05.520) 0:03:26.134 *****
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Saturday 01 November 2025 12:37:15 +0000 (0:00:00.624) 0:03:26.758 *****
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Saturday 01 November 2025 12:37:16 +0000 (0:00:00.573) 0:03:27.332 *****
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Saturday 01 November 2025 12:37:16 +0000 (0:00:00.459) 0:03:27.791 *****
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Saturday 01 November 2025 12:37:17 +0000 (0:00:00.709) 0:03:28.500 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]

TASK [osism.commons.services : Populate service facts] *************************
Saturday 01 November 2025 12:37:17 +0000 (0:00:00.344) 0:03:28.845 *****
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]

TASK [osism.commons.services : Check services] *********************************
Saturday 01 November 2025 12:37:23 +0000 (0:00:06.201) 0:03:35.046 *****
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-manager]

TASK [osism.commons.services : Start/enable required services] *****************
Saturday 01 November 2025 12:37:24 +0000 (0:00:00.357) 0:03:35.403 *****
ok: [testbed-node-0] => (item=cron)
ok: [testbed-node-1] => (item=cron)
ok: [testbed-node-2] => (item=cron)
ok: [testbed-node-3] => (item=cron)
ok: [testbed-node-5] => (item=cron)
ok: [testbed-node-4] => (item=cron)
ok: [testbed-manager] => (item=cron)

TASK [osism.commons.motd : Include distribution specific configure tasks] ******
Saturday 01 November 2025 12:37:25 +0000 (0:00:01.125) 0:03:36.529 *****
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager

TASK [osism.commons.motd : Remove update-motd package] *************************
Saturday 01 November 2025 12:37:25 +0000 (0:00:00.478) 0:03:37.008 *****
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
Saturday 01 November 2025 12:37:27 +0000 (0:00:01.397)
0:03:38.405 ***** 2025-11-01 12:37:30.166823 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:37:30.166833 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:37:30.166844 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:37:30.166854 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:37:30.166865 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:37:30.166875 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:37:30.166886 | orchestrator | ok: [testbed-manager] 2025-11-01 12:37:30.166903 | orchestrator | 2025-11-01 12:37:30.166915 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-11-01 12:37:30.166925 | orchestrator | Saturday 01 November 2025 12:37:27 +0000 (0:00:00.651) 0:03:39.056 ***** 2025-11-01 12:37:30.166936 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:37:30.166946 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:37:30.166957 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:37:30.166968 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:37:30.166978 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:37:30.166989 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:37:30.166999 | orchestrator | changed: [testbed-manager] 2025-11-01 12:37:30.167010 | orchestrator | 2025-11-01 12:37:30.167021 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-11-01 12:37:30.167032 | orchestrator | Saturday 01 November 2025 12:37:28 +0000 (0:00:00.675) 0:03:39.732 ***** 2025-11-01 12:37:30.167042 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:37:30.167053 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:37:30.167063 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:37:30.167082 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:37:30.167093 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:37:30.167104 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:37:30.167114 | orchestrator | ok: [testbed-manager] 
2025-11-01 12:37:30.167125 | orchestrator | 2025-11-01 12:37:30.167135 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-11-01 12:37:30.167146 | orchestrator | Saturday 01 November 2025 12:37:29 +0000 (0:00:00.641) 0:03:40.373 ***** 2025-11-01 12:37:30.167161 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999063.583271, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:30.167176 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999058.3627195, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:30.167187 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999061.8426878, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:30.167221 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999069.752064, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411794 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999061.7032948, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411924 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999061.380652, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411941 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1761999026.6988046, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411954 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411965 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411977 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 
1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.411988 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.412017 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.412041 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2025-11-01 12:37:35.412053 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 12:37:35.412065 | orchestrator | 2025-11-01 12:37:35.412078 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-11-01 12:37:35.412092 | orchestrator | Saturday 01 November 2025 12:37:30 +0000 (0:00:01.092) 0:03:41.465 ***** 2025-11-01 12:37:35.412104 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:37:35.412115 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:37:35.412126 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:37:35.412137 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:37:35.412147 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:37:35.412158 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:37:35.412169 | orchestrator | changed: [testbed-manager] 2025-11-01 12:37:35.412180 | orchestrator | 2025-11-01 12:37:35.412191 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-11-01 12:37:35.412202 | orchestrator | Saturday 01 November 2025 12:37:31 +0000 (0:00:01.214) 0:03:42.679 ***** 2025-11-01 12:37:35.412213 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:37:35.412224 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:37:35.412235 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:37:35.412245 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:37:35.412256 | orchestrator | changed: 
[testbed-node-5] 2025-11-01 12:37:35.412267 | orchestrator | changed: [testbed-manager] 2025-11-01 12:37:35.412277 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:37:35.412288 | orchestrator | 2025-11-01 12:37:35.412299 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-11-01 12:37:35.412352 | orchestrator | Saturday 01 November 2025 12:37:32 +0000 (0:00:01.177) 0:03:43.857 ***** 2025-11-01 12:37:35.412365 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:37:35.412376 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:37:35.412389 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:37:35.412402 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:37:35.412413 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:37:35.412426 | orchestrator | changed: [testbed-manager] 2025-11-01 12:37:35.412438 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:37:35.412450 | orchestrator | 2025-11-01 12:37:35.412462 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-11-01 12:37:35.412475 | orchestrator | Saturday 01 November 2025 12:37:33 +0000 (0:00:01.184) 0:03:45.042 ***** 2025-11-01 12:37:35.412494 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:37:35.412507 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:37:35.412519 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:37:35.412531 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:37:35.412543 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:37:35.412555 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:37:35.412567 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:37:35.412579 | orchestrator | 2025-11-01 12:37:35.412591 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-11-01 12:37:35.412604 | orchestrator | Saturday 01 November 2025 12:37:34 +0000 (0:00:00.300) 0:03:45.342 
***** 2025-11-01 12:37:35.412616 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:37:35.412629 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:37:35.412641 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:37:35.412652 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:37:35.412664 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:37:35.412676 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:37:35.412688 | orchestrator | ok: [testbed-manager] 2025-11-01 12:37:35.412699 | orchestrator | 2025-11-01 12:37:35.412710 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-11-01 12:37:35.412721 | orchestrator | Saturday 01 November 2025 12:37:34 +0000 (0:00:00.795) 0:03:46.137 ***** 2025-11-01 12:37:35.412734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:37:35.412748 | orchestrator | 2025-11-01 12:37:35.412759 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-11-01 12:37:35.412777 | orchestrator | Saturday 01 November 2025 12:37:35 +0000 (0:00:00.576) 0:03:46.713 ***** 2025-11-01 12:38:58.972821 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.972939 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:38:58.972955 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:38:58.972967 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:38:58.972978 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:38:58.972988 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:38:58.972999 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:38:58.973011 | orchestrator | 2025-11-01 12:38:58.973023 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-11-01 12:38:58.973035 | 
orchestrator | Saturday 01 November 2025 12:37:44 +0000 (0:00:08.819) 0:03:55.532 ***** 2025-11-01 12:38:58.973047 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973058 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973068 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973079 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973090 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973100 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973111 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973122 | orchestrator | 2025-11-01 12:38:58.973150 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-11-01 12:38:58.973161 | orchestrator | Saturday 01 November 2025 12:37:45 +0000 (0:00:01.337) 0:03:56.870 ***** 2025-11-01 12:38:58.973172 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973183 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973194 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973205 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973216 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973226 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973237 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973248 | orchestrator | 2025-11-01 12:38:58.973259 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-11-01 12:38:58.973272 | orchestrator | Saturday 01 November 2025 12:37:46 +0000 (0:00:01.102) 0:03:57.973 ***** 2025-11-01 12:38:58.973330 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973341 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973372 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973385 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973397 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973409 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973421 | 
orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973433 | orchestrator | 2025-11-01 12:38:58.973445 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-11-01 12:38:58.973459 | orchestrator | Saturday 01 November 2025 12:37:46 +0000 (0:00:00.340) 0:03:58.314 ***** 2025-11-01 12:38:58.973472 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973483 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973495 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973506 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973518 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973530 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973542 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973554 | orchestrator | 2025-11-01 12:38:58.973566 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-11-01 12:38:58.973578 | orchestrator | Saturday 01 November 2025 12:37:47 +0000 (0:00:00.350) 0:03:58.665 ***** 2025-11-01 12:38:58.973590 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973602 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973614 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973626 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973638 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973650 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973662 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973674 | orchestrator | 2025-11-01 12:38:58.973686 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-11-01 12:38:58.973698 | orchestrator | Saturday 01 November 2025 12:37:47 +0000 (0:00:00.361) 0:03:59.026 ***** 2025-11-01 12:38:58.973710 | orchestrator | ok: [testbed-manager] 2025-11-01 12:38:58.973723 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:38:58.973733 | 
orchestrator | ok: [testbed-node-1] 2025-11-01 12:38:58.973744 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:38:58.973754 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:38:58.973765 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:38:58.973776 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:38:58.973786 | orchestrator | 2025-11-01 12:38:58.973797 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-11-01 12:38:58.973808 | orchestrator | Saturday 01 November 2025 12:37:53 +0000 (0:00:05.887) 0:04:04.914 ***** 2025-11-01 12:38:58.973820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:38:58.973834 | orchestrator | 2025-11-01 12:38:58.973845 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-11-01 12:38:58.973856 | orchestrator | Saturday 01 November 2025 12:37:54 +0000 (0:00:00.468) 0:04:05.383 ***** 2025-11-01 12:38:58.973867 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.973877 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-11-01 12:38:58.973888 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.973899 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-11-01 12:38:58.973910 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:38:58.973921 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.973932 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-11-01 12:38:58.973943 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:38:58.973953 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.973964 | 
orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-11-01 12:38:58.973975 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:38:58.973994 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:38:58.974005 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.974068 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-11-01 12:38:58.974083 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.974094 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-11-01 12:38:58.974155 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:38:58.974169 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:38:58.974180 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-11-01 12:38:58.974191 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-11-01 12:38:58.974202 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:38:58.974212 | orchestrator | 2025-11-01 12:38:58.974223 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-11-01 12:38:58.974234 | orchestrator | Saturday 01 November 2025 12:37:54 +0000 (0:00:00.372) 0:04:05.756 ***** 2025-11-01 12:38:58.974246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:38:58.974257 | orchestrator | 2025-11-01 12:38:58.974268 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-11-01 12:38:58.974303 | orchestrator | Saturday 01 November 2025 12:37:54 +0000 (0:00:00.482) 0:04:06.238 ***** 2025-11-01 12:38:58.974315 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-11-01 12:38:58.974326 | orchestrator | 
skipping: [testbed-node-1] => (item=ModemManager.service)  2025-11-01 12:38:58.974337 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:38:58.974348 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-11-01 12:38:58.974358 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:38:58.974369 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-11-01 12:38:58.974380 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:38:58.974391 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:38:58.974402 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-11-01 12:38:58.974413 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-11-01 12:38:58.974423 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:38:58.974434 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:38:58.974445 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-11-01 12:38:58.974456 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:38:58.974467 | orchestrator | 2025-11-01 12:38:58.974478 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-11-01 12:38:58.974498 | orchestrator | Saturday 01 November 2025 12:37:55 +0000 (0:00:00.356) 0:04:06.594 ***** 2025-11-01 12:38:58.974510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:38:58.974521 | orchestrator | 2025-11-01 12:38:58.974532 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-11-01 12:38:58.974543 | orchestrator | Saturday 01 November 2025 12:37:55 +0000 (0:00:00.452) 0:04:07.047 ***** 2025-11-01 12:38:58.974553 | orchestrator | changed: [testbed-manager] 
2025-11-01 12:38:58.974565 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:38:58.974575 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:38:58.974586 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:38:58.974597 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:38:58.974607 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:38:58.974618 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:38:58.974629 | orchestrator |
2025-11-01 12:38:58.974640 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-11-01 12:38:58.974659 | orchestrator | Saturday 01 November 2025 12:38:32 +0000 (0:00:37.132) 0:04:44.180 *****
2025-11-01 12:38:58.974670 | orchestrator | changed: [testbed-manager]
2025-11-01 12:38:58.974681 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:38:58.974691 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:38:58.974702 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:38:58.974713 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:38:58.974723 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:38:58.974734 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:38:58.974744 | orchestrator |
2025-11-01 12:38:58.974755 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-11-01 12:38:58.974766 | orchestrator | Saturday 01 November 2025 12:38:41 +0000 (0:00:08.526) 0:04:52.707 *****
2025-11-01 12:38:58.974777 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:38:58.974788 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:38:58.974798 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:38:58.974809 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:38:58.974819 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:38:58.974830 | orchestrator | changed: [testbed-manager]
2025-11-01 12:38:58.974841 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:38:58.974851 | orchestrator |
2025-11-01 12:38:58.974862 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-11-01 12:38:58.974873 | orchestrator | Saturday 01 November 2025 12:38:50 +0000 (0:00:08.830) 0:05:01.537 *****
2025-11-01 12:38:58.974884 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:38:58.974895 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:38:58.974905 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:38:58.974916 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:38:58.974927 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:38:58.974937 | orchestrator | ok: [testbed-manager]
2025-11-01 12:38:58.974948 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:38:58.974959 | orchestrator |
2025-11-01 12:38:58.974970 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-11-01 12:38:58.974980 | orchestrator | Saturday 01 November 2025 12:38:52 +0000 (0:00:01.865) 0:05:03.403 *****
2025-11-01 12:38:58.974991 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:38:58.975002 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:38:58.975012 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:38:58.975023 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:38:58.975034 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:38:58.975045 | orchestrator | changed: [testbed-manager]
2025-11-01 12:38:58.975055 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:38:58.975066 | orchestrator |
2025-11-01 12:38:58.975084 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-11-01 12:39:11.278096 | orchestrator | Saturday 01 November 2025 12:38:58 +0000 (0:00:06.865) 0:05:10.269 *****
2025-11-01 12:39:11.278208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-01 12:39:11.278226 | orchestrator |
2025-11-01 12:39:11.278240 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-11-01 12:39:11.278251 | orchestrator | Saturday 01 November 2025 12:38:59 +0000 (0:00:00.468) 0:05:10.737 *****
2025-11-01 12:39:11.278263 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:39:11.278316 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:39:11.278328 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:39:11.278355 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:39:11.278367 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:39:11.278377 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:39:11.278388 | orchestrator | changed: [testbed-manager]
2025-11-01 12:39:11.278399 | orchestrator |
2025-11-01 12:39:11.278410 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-11-01 12:39:11.278440 | orchestrator | Saturday 01 November 2025 12:39:00 +0000 (0:00:00.796) 0:05:11.534 *****
2025-11-01 12:39:11.278452 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:39:11.278464 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:39:11.278475 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:39:11.278485 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:39:11.278496 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:39:11.278507 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:39:11.278518 | orchestrator | ok: [testbed-manager]
2025-11-01 12:39:11.278528 | orchestrator |
2025-11-01 12:39:11.278539 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-11-01 12:39:11.278550 | orchestrator | Saturday 01 November 2025 12:39:02 +0000 (0:00:01.866) 0:05:13.400 *****
2025-11-01 12:39:11.278562 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:39:11.278573 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:39:11.278583 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:39:11.278595 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:39:11.278608 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:39:11.278621 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:39:11.278633 | orchestrator | changed: [testbed-manager]
2025-11-01 12:39:11.278646 | orchestrator |
2025-11-01 12:39:11.278659 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-11-01 12:39:11.278671 | orchestrator | Saturday 01 November 2025 12:39:02 +0000 (0:00:00.872) 0:05:14.273 *****
2025-11-01 12:39:11.278684 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.278696 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.278708 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.278720 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:39:11.278732 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:39:11.278744 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:39:11.278756 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:39:11.278769 | orchestrator |
2025-11-01 12:39:11.278781 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-11-01 12:39:11.278794 | orchestrator | Saturday 01 November 2025 12:39:03 +0000 (0:00:00.352) 0:05:14.626 *****
2025-11-01 12:39:11.278806 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.278818 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.278831 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.278844 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:39:11.278855 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:39:11.278868 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:39:11.278880 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:39:11.278892 | orchestrator |
2025-11-01 12:39:11.278904 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-11-01 12:39:11.278917 | orchestrator | Saturday 01 November 2025 12:39:03 +0000 (0:00:00.472) 0:05:15.098 *****
2025-11-01 12:39:11.278929 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:39:11.278941 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:39:11.278954 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:39:11.278965 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:39:11.278976 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:39:11.278987 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:39:11.278998 | orchestrator | ok: [testbed-manager]
2025-11-01 12:39:11.279008 | orchestrator |
2025-11-01 12:39:11.279019 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-11-01 12:39:11.279030 | orchestrator | Saturday 01 November 2025 12:39:04 +0000 (0:00:00.350) 0:05:15.449 *****
2025-11-01 12:39:11.279041 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.279052 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.279062 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.279073 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:39:11.279084 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:39:11.279094 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:39:11.279105 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:39:11.279123 | orchestrator |
2025-11-01 12:39:11.279134 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-11-01 12:39:11.279146 | orchestrator | Saturday 01 November 2025 12:39:04 +0000 (0:00:00.336) 0:05:15.786 *****
2025-11-01 12:39:11.279157 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:39:11.279167 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:39:11.279178 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:39:11.279189 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:39:11.279200 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:39:11.279210 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:39:11.279221 | orchestrator | ok: [testbed-manager]
2025-11-01 12:39:11.279232 | orchestrator |
2025-11-01 12:39:11.279243 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-11-01 12:39:11.279254 | orchestrator | Saturday 01 November 2025 12:39:04 +0000 (0:00:00.354) 0:05:16.141 *****
2025-11-01 12:39:11.279264 | orchestrator | ok: [testbed-node-0] =>
2025-11-01 12:39:11.279296 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279308 | orchestrator | ok: [testbed-node-1] =>
2025-11-01 12:39:11.279318 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279329 | orchestrator | ok: [testbed-node-2] =>
2025-11-01 12:39:11.279340 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279351 | orchestrator | ok: [testbed-node-3] =>
2025-11-01 12:39:11.279361 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279388 | orchestrator | ok: [testbed-node-4] =>
2025-11-01 12:39:11.279400 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279411 | orchestrator | ok: [testbed-node-5] =>
2025-11-01 12:39:11.279422 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279433 | orchestrator | ok: [testbed-manager] =>
2025-11-01 12:39:11.279443 | orchestrator |   docker_version: 5:27.5.1
2025-11-01 12:39:11.279454 | orchestrator |
2025-11-01 12:39:11.279465 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-11-01 12:39:11.279476 | orchestrator | Saturday 01 November 2025 12:39:05 +0000 (0:00:00.321) 0:05:16.463 *****
2025-11-01 12:39:11.279487 | orchestrator | ok: [testbed-node-0] =>
2025-11-01 12:39:11.279497 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279508 | orchestrator | ok: [testbed-node-1] =>
2025-11-01 12:39:11.279519 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279529 | orchestrator | ok: [testbed-node-2] =>
2025-11-01 12:39:11.279546 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279557 | orchestrator | ok: [testbed-node-3] =>
2025-11-01 12:39:11.279568 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279578 | orchestrator | ok: [testbed-node-4] =>
2025-11-01 12:39:11.279589 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279600 | orchestrator | ok: [testbed-node-5] =>
2025-11-01 12:39:11.279611 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279621 | orchestrator | ok: [testbed-manager] =>
2025-11-01 12:39:11.279632 | orchestrator |   docker_cli_version: 5:27.5.1
2025-11-01 12:39:11.279643 | orchestrator |
2025-11-01 12:39:11.279654 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-11-01 12:39:11.279665 | orchestrator | Saturday 01 November 2025 12:39:05 +0000 (0:00:00.344) 0:05:16.808 *****
2025-11-01 12:39:11.279676 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.279686 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.279697 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.279708 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:39:11.279718 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:39:11.279729 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:39:11.279740 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:39:11.279751 | orchestrator |
2025-11-01 12:39:11.279761 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-11-01 12:39:11.279772 | orchestrator | Saturday 01 November 2025 12:39:05 +0000 (0:00:00.421) 0:05:17.229 *****
2025-11-01 12:39:11.279783 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.279801 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.279811 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.279822 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:39:11.279833 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:39:11.279843 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:39:11.279854 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:39:11.279865 | orchestrator |
2025-11-01 12:39:11.279876 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-11-01 12:39:11.279887 | orchestrator | Saturday 01 November 2025 12:39:06 +0000 (0:00:00.309) 0:05:17.539 *****
2025-11-01 12:39:11.279899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-01 12:39:11.279912 | orchestrator |
2025-11-01 12:39:11.279923 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-11-01 12:39:11.279934 | orchestrator | Saturday 01 November 2025 12:39:06 +0000 (0:00:00.499) 0:05:18.038 *****
2025-11-01 12:39:11.279945 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:39:11.279956 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:39:11.279967 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:39:11.279977 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:39:11.279988 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:39:11.279999 | orchestrator | ok: [testbed-manager]
2025-11-01 12:39:11.280010 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:39:11.280020 | orchestrator |
2025-11-01 12:39:11.280031 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-11-01 12:39:11.280042 | orchestrator | Saturday 01 November 2025 12:39:07 +0000 (0:00:00.905) 0:05:18.944 *****
2025-11-01 12:39:11.280053 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:39:11.280064 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:39:11.280074 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:39:11.280085 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:39:11.280096 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:39:11.280107 | orchestrator | ok: [testbed-manager]
2025-11-01 12:39:11.280117 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:39:11.280128 | orchestrator |
2025-11-01 12:39:11.280139 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-11-01 12:39:11.280151 | orchestrator | Saturday 01 November 2025 12:39:10 +0000 (0:00:03.217) 0:05:22.162 *****
2025-11-01 12:39:11.280162 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-11-01 12:39:11.280174 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-11-01 12:39:11.280185 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-11-01 12:39:11.280196 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:39:11.280207 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-11-01 12:39:11.280218 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-11-01 12:39:11.280229 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-11-01 12:39:11.280239 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-11-01 12:39:11.280250 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-11-01 12:39:11.280261 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-11-01 12:39:11.280290 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:39:11.280301 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-11-01 12:39:11.280312 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-11-01 12:39:11.280323 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-11-01 12:39:11.280334 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:39:11.280345 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-11-01 12:39:11.280362 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-11-01 12:40:15.140520 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-11-01 12:40:15.140623 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:15.140633 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-11-01 12:40:15.140640 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-11-01 12:40:15.140646 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-11-01 12:40:15.140652 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:15.140659 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:15.140665 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-11-01 12:40:15.140672 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-11-01 12:40:15.140678 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-11-01 12:40:15.140684 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:15.140691 | orchestrator |
2025-11-01 12:40:15.140709 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-11-01 12:40:15.140717 | orchestrator | Saturday 01 November 2025 12:39:11 +0000 (0:00:00.685) 0:05:22.847 *****
2025-11-01 12:40:15.140724 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.140730 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.140736 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.140742 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.140749 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.140755 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.140761 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.140767 | orchestrator |
2025-11-01 12:40:15.140773 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-11-01 12:40:15.140779 | orchestrator | Saturday 01 November 2025 12:39:19 +0000 (0:00:07.849) 0:05:30.696 *****
2025-11-01 12:40:15.140786 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.140792 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.140798 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.140804 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.140810 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.140816 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.140822 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.140828 | orchestrator |
2025-11-01 12:40:15.140835 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-11-01 12:40:15.140841 | orchestrator | Saturday 01 November 2025 12:39:20 +0000 (0:00:01.152) 0:05:31.849 *****
2025-11-01 12:40:15.140847 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.140853 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.140859 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.140865 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.140871 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.140877 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.140883 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.140889 | orchestrator |
2025-11-01 12:40:15.140895 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-11-01 12:40:15.140901 | orchestrator | Saturday 01 November 2025 12:39:28 +0000 (0:00:08.109) 0:05:39.958 *****
2025-11-01 12:40:15.140908 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.140914 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.140920 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.140926 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.140932 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.140938 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.140944 | orchestrator | changed: [testbed-manager]
2025-11-01 12:40:15.140950 | orchestrator |
2025-11-01 12:40:15.140956 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-11-01 12:40:15.140963 | orchestrator | Saturday 01 November 2025 12:39:32 +0000 (0:00:03.548) 0:05:43.507 *****
2025-11-01 12:40:15.140969 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.140975 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.140981 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.140992 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.140998 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141004 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141010 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141016 | orchestrator |
2025-11-01 12:40:15.141023 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-11-01 12:40:15.141029 | orchestrator | Saturday 01 November 2025 12:39:33 +0000 (0:00:01.576) 0:05:45.083 *****
2025-11-01 12:40:15.141035 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141041 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141047 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141053 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141059 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141065 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141071 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141077 | orchestrator |
2025-11-01 12:40:15.141084 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-11-01 12:40:15.141090 | orchestrator | Saturday 01 November 2025 12:39:35 +0000 (0:00:01.377) 0:05:46.461 *****
2025-11-01 12:40:15.141096 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:15.141102 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:15.141108 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:15.141114 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:15.141120 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:15.141126 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:15.141132 | orchestrator | changed: [testbed-manager]
2025-11-01 12:40:15.141138 | orchestrator |
2025-11-01 12:40:15.141144 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-11-01 12:40:15.141150 | orchestrator | Saturday 01 November 2025 12:39:36 +0000 (0:00:01.198) 0:05:47.659 *****
2025-11-01 12:40:15.141156 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141162 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141168 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141174 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141180 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141187 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141192 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141198 | orchestrator |
2025-11-01 12:40:15.141205 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-11-01 12:40:15.141222 | orchestrator | Saturday 01 November 2025 12:39:46 +0000 (0:00:09.936) 0:05:57.595 *****
2025-11-01 12:40:15.141229 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141235 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141241 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141247 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141271 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141277 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141283 | orchestrator | changed: [testbed-manager]
2025-11-01 12:40:15.141289 | orchestrator |
2025-11-01 12:40:15.141296 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-11-01 12:40:15.141302 | orchestrator | Saturday 01 November 2025 12:39:47 +0000 (0:00:00.995) 0:05:58.591 *****
2025-11-01 12:40:15.141308 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141314 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141320 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141326 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141333 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141339 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141345 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141351 | orchestrator |
2025-11-01 12:40:15.141357 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-11-01 12:40:15.141363 | orchestrator | Saturday 01 November 2025 12:39:56 +0000 (0:00:09.542) 0:06:08.133 *****
2025-11-01 12:40:15.141375 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141381 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141387 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141393 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141399 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141405 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141411 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141417 | orchestrator |
2025-11-01 12:40:15.141424 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-11-01 12:40:15.141430 | orchestrator | Saturday 01 November 2025 12:40:08 +0000 (0:00:11.268) 0:06:19.402 *****
2025-11-01 12:40:15.141436 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-11-01 12:40:15.141442 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-11-01 12:40:15.141448 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-11-01 12:40:15.141455 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-11-01 12:40:15.141461 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-11-01 12:40:15.141467 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-11-01 12:40:15.141473 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-11-01 12:40:15.141479 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-11-01 12:40:15.141485 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-11-01 12:40:15.141491 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-11-01 12:40:15.141498 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-11-01 12:40:15.141504 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-11-01 12:40:15.141510 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-11-01 12:40:15.141516 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-11-01 12:40:15.141522 | orchestrator |
2025-11-01 12:40:15.141528 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-11-01 12:40:15.141535 | orchestrator | Saturday 01 November 2025 12:40:09 +0000 (0:00:01.337) 0:06:20.740 *****
2025-11-01 12:40:15.141541 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:15.141547 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:15.141553 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:15.141559 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:15.141565 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:15.141571 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:15.141577 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:15.141584 | orchestrator |
2025-11-01 12:40:15.141590 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-11-01 12:40:15.141596 | orchestrator | Saturday 01 November 2025 12:40:10 +0000 (0:00:00.621) 0:06:21.361 *****
2025-11-01 12:40:15.141602 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:15.141609 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:15.141615 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:15.141621 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:15.141627 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:15.141633 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:15.141639 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:15.141645 | orchestrator |
2025-11-01 12:40:15.141651 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-11-01 12:40:15.141658 | orchestrator | Saturday 01 November 2025 12:40:13 +0000 (0:00:03.836) 0:06:25.197 *****
2025-11-01 12:40:15.141665 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:15.141671 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:15.141677 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:15.141683 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:15.141689 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:15.141695 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:15.141701 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:15.141707 | orchestrator |
2025-11-01 12:40:15.141718 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-11-01 12:40:15.141724 | orchestrator | Saturday 01 November 2025 12:40:14 +0000 (0:00:00.855) 0:06:26.053 *****
2025-11-01 12:40:15.141731 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-11-01 12:40:15.141737 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-11-01 12:40:15.141743 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:15.141749 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-11-01 12:40:15.141755 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-11-01 12:40:15.141762 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:15.141797 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-11-01 12:40:15.141805 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-11-01 12:40:15.141811 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:15.141821 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-11-01 12:40:35.688302 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-11-01 12:40:35.688411 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:35.688428 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-11-01 12:40:35.688441 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-11-01 12:40:35.688452 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:35.688464 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-11-01 12:40:35.688475 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-11-01 12:40:35.688486 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:35.688497 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-11-01 12:40:35.688508 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-11-01 12:40:35.688544 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:35.688556 | orchestrator |
2025-11-01 12:40:35.688568 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-11-01 12:40:35.688580 | orchestrator | Saturday 01 November 2025 12:40:15 +0000 (0:00:00.701) 0:06:26.754 *****
2025-11-01 12:40:35.688591 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:35.688602 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:35.688613 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:35.688624 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:35.688635 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:35.688646 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:35.688656 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:35.688667 | orchestrator |
2025-11-01 12:40:35.688678 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-11-01 12:40:35.688689 | orchestrator | Saturday 01 November 2025 12:40:16 +0000 (0:00:00.606) 0:06:27.361 *****
2025-11-01 12:40:35.688700 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:35.688711 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:35.688721 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:35.688732 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:35.688743 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:35.688754 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:35.688764 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:35.688775 | orchestrator |
2025-11-01 12:40:35.688786 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-11-01 12:40:35.688797 | orchestrator | Saturday 01 November 2025 12:40:16 +0000 (0:00:00.572) 0:06:27.933 *****
2025-11-01 12:40:35.688808 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:40:35.688820 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:40:35.688833 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:40:35.688845 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:40:35.688857 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:40:35.688869 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:40:35.688901 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:35.688913 | orchestrator |
2025-11-01 12:40:35.688926 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-11-01 12:40:35.688938 | orchestrator | Saturday 01 November 2025 12:40:17 +0000 (0:00:00.641) 0:06:28.574 *****
2025-11-01 12:40:35.688951 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:40:35.688963 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:40:35.688976 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:40:35.688987 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:40:35.689005 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:40:35.689024 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:35.689044 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:40:35.689063 | orchestrator |
2025-11-01 12:40:35.689083 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-11-01 12:40:35.689102 | orchestrator | Saturday 01 November 2025 12:40:19 +0000 (0:00:01.928) 0:06:30.503 *****
2025-11-01 12:40:35.689122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-11-01 12:40:35.689146 | orchestrator |
2025-11-01 12:40:35.689166 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-11-01 12:40:35.689182 | orchestrator | Saturday 01 November 2025 12:40:20 +0000 (0:00:00.919) 0:06:31.422 *****
2025-11-01 12:40:35.689193 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:35.689204 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:35.689215 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:35.689226 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:35.689236 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:35.689267 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:35.689279 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:35.689290 | orchestrator |
2025-11-01 12:40:35.689300 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-11-01 12:40:35.689311 | orchestrator | Saturday 01 November 2025 12:40:20 +0000 (0:00:00.881) 0:06:32.304 *****
2025-11-01 12:40:35.689322 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:35.689332 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:35.689343 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:35.689353 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:35.689364 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:35.689374 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:35.689385 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:35.689395 | orchestrator |
2025-11-01 12:40:35.689406 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-11-01 12:40:35.689417 | orchestrator | Saturday 01 November 2025 12:40:22 +0000 (0:00:01.169) 0:06:33.473 *****
2025-11-01 12:40:35.689428 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:35.689438 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:35.689449 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:35.689459 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:35.689469 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:35.689480 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:35.689491 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:35.689501 | orchestrator |
2025-11-01 12:40:35.689513 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-11-01 12:40:35.689542 | orchestrator | Saturday 01 November 2025 12:40:23 +0000 (0:00:01.407) 0:06:34.881 *****
2025-11-01 12:40:35.689554 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:40:35.689565 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:40:35.689576 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:40:35.689586 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:40:35.689597 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:40:35.689608 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:40:35.689618 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:40:35.689639 | orchestrator |
2025-11-01 12:40:35.689650 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-11-01 12:40:35.689660 | orchestrator | Saturday 01 November 2025 12:40:24 +0000 (0:00:01.310) 0:06:36.191 *****
2025-11-01 12:40:35.689671 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:35.689682 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:35.689693 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:35.689704 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:35.689715 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:40:35.689725 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:40:35.689736 | orchestrator | ok: [testbed-manager]
2025-11-01 12:40:35.689747 | orchestrator |
2025-11-01 12:40:35.689758 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-11-01 12:40:35.689769 | orchestrator | Saturday 01 November 2025 12:40:26 +0000 (0:00:01.335) 0:06:37.527 *****
2025-11-01 12:40:35.689779 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:40:35.689790 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:40:35.689800 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:40:35.689811 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:40:35.689822 | orchestrator | changed:
[testbed-node-4] 2025-11-01 12:40:35.689832 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:40:35.689843 | orchestrator | changed: [testbed-manager] 2025-11-01 12:40:35.689853 | orchestrator | 2025-11-01 12:40:35.689864 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-11-01 12:40:35.689875 | orchestrator | Saturday 01 November 2025 12:40:27 +0000 (0:00:01.438) 0:06:38.965 ***** 2025-11-01 12:40:35.689886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:40:35.689897 | orchestrator | 2025-11-01 12:40:35.689907 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-11-01 12:40:35.689918 | orchestrator | Saturday 01 November 2025 12:40:28 +0000 (0:00:01.181) 0:06:40.147 ***** 2025-11-01 12:40:35.689929 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:40:35.689940 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:40:35.689950 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:40:35.689961 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:40:35.689971 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:40:35.689982 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:40:35.689992 | orchestrator | ok: [testbed-manager] 2025-11-01 12:40:35.690003 | orchestrator | 2025-11-01 12:40:35.690014 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-11-01 12:40:35.690062 | orchestrator | Saturday 01 November 2025 12:40:30 +0000 (0:00:01.508) 0:06:41.655 ***** 2025-11-01 12:40:35.690075 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:40:35.690086 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:40:35.690097 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:40:35.690107 | orchestrator | ok: [testbed-node-3] 
2025-11-01 12:40:35.690118 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:40:35.690128 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:40:35.690139 | orchestrator | ok: [testbed-manager] 2025-11-01 12:40:35.690150 | orchestrator | 2025-11-01 12:40:35.690160 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-11-01 12:40:35.690171 | orchestrator | Saturday 01 November 2025 12:40:31 +0000 (0:00:01.206) 0:06:42.861 ***** 2025-11-01 12:40:35.690182 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:40:35.690192 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:40:35.690203 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:40:35.690213 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:40:35.690224 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:40:35.690234 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:40:35.690245 | orchestrator | ok: [testbed-manager] 2025-11-01 12:40:35.690273 | orchestrator | 2025-11-01 12:40:35.690285 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-11-01 12:40:35.690308 | orchestrator | Saturday 01 November 2025 12:40:33 +0000 (0:00:01.500) 0:06:44.362 ***** 2025-11-01 12:40:35.690320 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:40:35.690330 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:40:35.690341 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:40:35.690351 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:40:35.690362 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:40:35.690372 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:40:35.690383 | orchestrator | ok: [testbed-manager] 2025-11-01 12:40:35.690394 | orchestrator | 2025-11-01 12:40:35.690404 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-11-01 12:40:35.690415 | orchestrator | Saturday 01 November 2025 12:40:34 +0000 (0:00:01.250) 0:06:45.613 ***** 2025-11-01 12:40:35.690426 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:40:35.690437 | orchestrator | 2025-11-01 12:40:35.690448 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:40:35.690458 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:01.018) 0:06:46.632 ***** 2025-11-01 12:40:35.690469 | orchestrator | 2025-11-01 12:40:35.690480 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:40:35.690490 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.045) 0:06:46.677 ***** 2025-11-01 12:40:35.690501 | orchestrator | 2025-11-01 12:40:35.690512 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:40:35.690522 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.051) 0:06:46.729 ***** 2025-11-01 12:40:35.690533 | orchestrator | 2025-11-01 12:40:35.690543 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:40:35.690562 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.045) 0:06:46.775 ***** 2025-11-01 12:41:03.098846 | orchestrator | 2025-11-01 12:41:03.098955 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:41:03.098970 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.044) 0:06:46.820 ***** 2025-11-01 12:41:03.098981 | orchestrator | 2025-11-01 12:41:03.098991 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:41:03.099001 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.049) 0:06:46.869 ***** 2025-11-01 12:41:03.099011 | orchestrator | 
2025-11-01 12:41:03.099020 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-11-01 12:41:03.099030 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.060) 0:06:46.930 ***** 2025-11-01 12:41:03.099040 | orchestrator | 2025-11-01 12:41:03.099065 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-11-01 12:41:03.099075 | orchestrator | Saturday 01 November 2025 12:40:35 +0000 (0:00:00.049) 0:06:46.979 ***** 2025-11-01 12:41:03.099085 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:03.099096 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:03.099105 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:03.099115 | orchestrator | 2025-11-01 12:41:03.099125 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-11-01 12:41:03.099135 | orchestrator | Saturday 01 November 2025 12:40:36 +0000 (0:00:01.263) 0:06:48.242 ***** 2025-11-01 12:41:03.099145 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:03.099156 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:03.099165 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:03.099175 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:03.099185 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:03.099195 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:03.099204 | orchestrator | changed: [testbed-manager] 2025-11-01 12:41:03.099214 | orchestrator | 2025-11-01 12:41:03.099224 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2025-11-01 12:41:03.099327 | orchestrator | Saturday 01 November 2025 12:40:38 +0000 (0:00:01.666) 0:06:49.908 ***** 2025-11-01 12:41:03.099341 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:03.099351 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:03.099360 | orchestrator | changed: [testbed-node-2] 
2025-11-01 12:41:03.099370 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:03.099379 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:03.099391 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:03.099402 | orchestrator | changed: [testbed-manager] 2025-11-01 12:41:03.099414 | orchestrator | 2025-11-01 12:41:03.099425 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-11-01 12:41:03.099437 | orchestrator | Saturday 01 November 2025 12:40:39 +0000 (0:00:01.358) 0:06:51.267 ***** 2025-11-01 12:41:03.099449 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:03.099460 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:03.099471 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:03.099480 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:03.099490 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:03.099500 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:03.099510 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:03.099520 | orchestrator | 2025-11-01 12:41:03.099530 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-11-01 12:41:03.099540 | orchestrator | Saturday 01 November 2025 12:40:42 +0000 (0:00:02.380) 0:06:53.647 ***** 2025-11-01 12:41:03.099549 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:03.099559 | orchestrator | 2025-11-01 12:41:03.099569 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-11-01 12:41:03.099579 | orchestrator | Saturday 01 November 2025 12:40:42 +0000 (0:00:00.097) 0:06:53.745 ***** 2025-11-01 12:41:03.099588 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:03.099598 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:03.099608 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:03.099617 | orchestrator | changed: [testbed-node-2] 2025-11-01 
12:41:03.099627 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:03.099637 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:03.099646 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:03.099656 | orchestrator | 2025-11-01 12:41:03.099666 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-11-01 12:41:03.099676 | orchestrator | Saturday 01 November 2025 12:40:43 +0000 (0:00:01.061) 0:06:54.806 ***** 2025-11-01 12:41:03.099686 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:03.099695 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:03.099705 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:03.099714 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:03.099724 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:03.099734 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:03.099743 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:03.099753 | orchestrator | 2025-11-01 12:41:03.099763 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-11-01 12:41:03.099772 | orchestrator | Saturday 01 November 2025 12:40:44 +0000 (0:00:00.885) 0:06:55.691 ***** 2025-11-01 12:41:03.099783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:41:03.099795 | orchestrator | 2025-11-01 12:41:03.099805 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-11-01 12:41:03.099815 | orchestrator | Saturday 01 November 2025 12:40:45 +0000 (0:00:01.103) 0:06:56.795 ***** 2025-11-01 12:41:03.099824 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:03.099835 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:03.099844 | orchestrator | 
ok: [testbed-node-2] 2025-11-01 12:41:03.099854 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:03.099863 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:03.099880 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:03.099890 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:03.099900 | orchestrator | 2025-11-01 12:41:03.099909 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-11-01 12:41:03.099919 | orchestrator | Saturday 01 November 2025 12:40:46 +0000 (0:00:00.926) 0:06:57.722 ***** 2025-11-01 12:41:03.099929 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-11-01 12:41:03.099954 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-11-01 12:41:03.099965 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-11-01 12:41:03.099975 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-11-01 12:41:03.099984 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-11-01 12:41:03.099994 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-11-01 12:41:03.100004 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-11-01 12:41:03.100015 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-11-01 12:41:03.100024 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-11-01 12:41:03.100039 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-11-01 12:41:03.100049 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-11-01 12:41:03.100059 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-11-01 12:41:03.100068 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-11-01 12:41:03.100078 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-11-01 12:41:03.100087 | orchestrator | 2025-11-01 12:41:03.100097 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2025-11-01 12:41:03.100107 | orchestrator | Saturday 01 November 2025 12:40:49 +0000 (0:00:02.672) 0:07:00.394 ***** 2025-11-01 12:41:03.100116 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:03.100126 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:03.100136 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:03.100145 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:03.100155 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:03.100164 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:03.100174 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:03.100183 | orchestrator | 2025-11-01 12:41:03.100193 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-11-01 12:41:03.100203 | orchestrator | Saturday 01 November 2025 12:40:49 +0000 (0:00:00.584) 0:07:00.978 ***** 2025-11-01 12:41:03.100214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:41:03.100225 | orchestrator | 2025-11-01 12:41:03.100234 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-11-01 12:41:03.100261 | orchestrator | Saturday 01 November 2025 12:40:50 +0000 (0:00:00.948) 0:07:01.927 ***** 2025-11-01 12:41:03.100272 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:03.100281 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:03.100291 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:03.100301 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:03.100310 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:03.100320 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:03.100329 | orchestrator | ok: 
[testbed-manager] 2025-11-01 12:41:03.100339 | orchestrator | 2025-11-01 12:41:03.100349 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-11-01 12:41:03.100359 | orchestrator | Saturday 01 November 2025 12:40:51 +0000 (0:00:00.955) 0:07:02.882 ***** 2025-11-01 12:41:03.100368 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:03.100378 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:03.100388 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:03.100397 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:03.100414 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:03.100424 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:03.100434 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:03.100443 | orchestrator | 2025-11-01 12:41:03.100453 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-11-01 12:41:03.100462 | orchestrator | Saturday 01 November 2025 12:40:52 +0000 (0:00:01.131) 0:07:04.014 ***** 2025-11-01 12:41:03.100472 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:03.100482 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:03.100491 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:03.100501 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:03.100511 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:03.100520 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:03.100530 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:03.100539 | orchestrator | 2025-11-01 12:41:03.100549 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-11-01 12:41:03.100559 | orchestrator | Saturday 01 November 2025 12:40:53 +0000 (0:00:00.575) 0:07:04.590 ***** 2025-11-01 12:41:03.100569 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:03.100578 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:03.100588 | 
orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:03.100598 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:03.100607 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:03.100617 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:03.100626 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:03.100636 | orchestrator | 2025-11-01 12:41:03.100645 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-11-01 12:41:03.100655 | orchestrator | Saturday 01 November 2025 12:40:54 +0000 (0:00:01.681) 0:07:06.271 ***** 2025-11-01 12:41:03.100665 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:03.100674 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:03.100684 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:03.100694 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:03.100703 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:03.100713 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:03.100722 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:03.100732 | orchestrator | 2025-11-01 12:41:03.100742 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-11-01 12:41:03.100752 | orchestrator | Saturday 01 November 2025 12:40:55 +0000 (0:00:00.555) 0:07:06.826 ***** 2025-11-01 12:41:03.100761 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:03.100771 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:03.100780 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:03.100790 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:03.100799 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:03.100809 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:03.100824 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:37.575511 | orchestrator | 2025-11-01 12:41:37.575611 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2025-11-01 12:41:37.575629 | orchestrator | Saturday 01 November 2025 12:41:03 +0000 (0:00:07.569) 0:07:14.395 ***** 2025-11-01 12:41:37.575641 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:37.575654 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:37.575665 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:37.575676 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:37.575688 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:37.575699 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:37.575710 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.575722 | orchestrator | 2025-11-01 12:41:37.575734 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-11-01 12:41:37.575746 | orchestrator | Saturday 01 November 2025 12:41:04 +0000 (0:00:01.417) 0:07:15.813 ***** 2025-11-01 12:41:37.575757 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:37.575768 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:37.575804 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:37.575816 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:37.575827 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.575838 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:37.575848 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:37.575859 | orchestrator | 2025-11-01 12:41:37.575870 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-11-01 12:41:37.575882 | orchestrator | Saturday 01 November 2025 12:41:06 +0000 (0:00:01.729) 0:07:17.543 ***** 2025-11-01 12:41:37.575893 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:37.575904 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:37.575914 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:37.575925 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:37.575936 | 
orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:37.575947 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:37.575958 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.575969 | orchestrator | 2025-11-01 12:41:37.575980 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 12:41:37.575991 | orchestrator | Saturday 01 November 2025 12:41:07 +0000 (0:00:01.738) 0:07:19.282 ***** 2025-11-01 12:41:37.576003 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.576014 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.576025 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.576036 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.576047 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.576057 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.576069 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.576080 | orchestrator | 2025-11-01 12:41:37.576091 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 12:41:37.576102 | orchestrator | Saturday 01 November 2025 12:41:09 +0000 (0:00:01.206) 0:07:20.488 ***** 2025-11-01 12:41:37.576113 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:37.576124 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:37.576136 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:37.576147 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:37.576158 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:37.576169 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:37.576179 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:37.576190 | orchestrator | 2025-11-01 12:41:37.576201 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-11-01 12:41:37.576213 | orchestrator | Saturday 01 November 2025 12:41:10 +0000 (0:00:00.954) 0:07:21.443 ***** 2025-11-01 
12:41:37.576224 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:37.576264 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:37.576276 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:37.576287 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:41:37.576298 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:37.576309 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:37.576320 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:37.576331 | orchestrator | 2025-11-01 12:41:37.576342 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-11-01 12:41:37.576353 | orchestrator | Saturday 01 November 2025 12:41:10 +0000 (0:00:00.595) 0:07:22.039 ***** 2025-11-01 12:41:37.576364 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.576375 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.576386 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.576397 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.576408 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.576419 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.576430 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.576441 | orchestrator | 2025-11-01 12:41:37.576453 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-11-01 12:41:37.576464 | orchestrator | Saturday 01 November 2025 12:41:11 +0000 (0:00:00.602) 0:07:22.641 ***** 2025-11-01 12:41:37.576485 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.576496 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.576507 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.576518 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.576529 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.576540 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.576551 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.576562 
| orchestrator | 2025-11-01 12:41:37.576573 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-11-01 12:41:37.576584 | orchestrator | Saturday 01 November 2025 12:41:12 +0000 (0:00:00.782) 0:07:23.424 ***** 2025-11-01 12:41:37.576596 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.576607 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.576618 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.576629 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.576639 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.576650 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.576662 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.576673 | orchestrator | 2025-11-01 12:41:37.576684 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-11-01 12:41:37.576695 | orchestrator | Saturday 01 November 2025 12:41:12 +0000 (0:00:00.568) 0:07:23.993 ***** 2025-11-01 12:41:37.576706 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.576718 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.576729 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.576740 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.576751 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.576761 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.576773 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.576784 | orchestrator | 2025-11-01 12:41:37.576810 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-11-01 12:41:37.576822 | orchestrator | Saturday 01 November 2025 12:41:18 +0000 (0:00:05.635) 0:07:29.629 ***** 2025-11-01 12:41:37.576834 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:41:37.576845 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:41:37.576855 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:41:37.576866 | orchestrator | 
skipping: [testbed-node-3] 2025-11-01 12:41:37.576877 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:41:37.576904 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:41:37.576916 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:41:37.576927 | orchestrator | 2025-11-01 12:41:37.576938 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-11-01 12:41:37.576954 | orchestrator | Saturday 01 November 2025 12:41:18 +0000 (0:00:00.671) 0:07:30.300 ***** 2025-11-01 12:41:37.576968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:41:37.576981 | orchestrator | 2025-11-01 12:41:37.576992 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-11-01 12:41:37.577003 | orchestrator | Saturday 01 November 2025 12:41:20 +0000 (0:00:01.197) 0:07:31.497 ***** 2025-11-01 12:41:37.577014 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.577025 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.577036 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.577047 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.577058 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.577068 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.577080 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.577090 | orchestrator | 2025-11-01 12:41:37.577102 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-11-01 12:41:37.577112 | orchestrator | Saturday 01 November 2025 12:41:22 +0000 (0:00:02.098) 0:07:33.596 ***** 2025-11-01 12:41:37.577123 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.577134 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.577153 | 
orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.577163 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.577174 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.577185 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.577195 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.577206 | orchestrator | 2025-11-01 12:41:37.577217 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-11-01 12:41:37.577228 | orchestrator | Saturday 01 November 2025 12:41:23 +0000 (0:00:01.304) 0:07:34.901 ***** 2025-11-01 12:41:37.577256 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:41:37.577268 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:41:37.577278 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:41:37.577289 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:41:37.577300 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:41:37.577311 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:41:37.577322 | orchestrator | ok: [testbed-manager] 2025-11-01 12:41:37.577333 | orchestrator | 2025-11-01 12:41:37.577344 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-11-01 12:41:37.577355 | orchestrator | Saturday 01 November 2025 12:41:24 +0000 (0:00:00.911) 0:07:35.812 ***** 2025-11-01 12:41:37.577366 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577379 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577390 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577401 | orchestrator | changed: [testbed-node-3] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577412 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577423 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577434 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-11-01 12:41:37.577445 | orchestrator | 2025-11-01 12:41:37.577456 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-11-01 12:41:37.577467 | orchestrator | Saturday 01 November 2025 12:41:26 +0000 (0:00:02.010) 0:07:37.823 ***** 2025-11-01 12:41:37.577479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:41:37.577490 | orchestrator | 2025-11-01 12:41:37.577501 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-11-01 12:41:37.577512 | orchestrator | Saturday 01 November 2025 12:41:27 +0000 (0:00:00.913) 0:07:38.736 ***** 2025-11-01 12:41:37.577523 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:41:37.577534 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:41:37.577545 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:41:37.577556 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:41:37.577567 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:41:37.577578 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:41:37.577589 | orchestrator | changed: 
[testbed-manager] 2025-11-01 12:41:37.577600 | orchestrator | 2025-11-01 12:41:37.577617 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-11-01 12:42:12.023052 | orchestrator | Saturday 01 November 2025 12:41:37 +0000 (0:00:10.130) 0:07:48.867 ***** 2025-11-01 12:42:12.023175 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:12.023193 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:12.023859 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:12.023881 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:12.023894 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:12.023906 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:12.023920 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:12.023933 | orchestrator | 2025-11-01 12:42:12.023946 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-11-01 12:42:12.023958 | orchestrator | Saturday 01 November 2025 12:41:39 +0000 (0:00:02.183) 0:07:51.050 ***** 2025-11-01 12:42:12.023968 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:12.023979 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:12.024004 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:12.024015 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:12.024026 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:12.024036 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:12.024047 | orchestrator | 2025-11-01 12:42:12.024058 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-11-01 12:42:12.024069 | orchestrator | Saturday 01 November 2025 12:41:41 +0000 (0:00:01.275) 0:07:52.326 ***** 2025-11-01 12:42:12.024080 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.024092 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.024102 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.024113 | orchestrator | changed: 
[testbed-node-3] 2025-11-01 12:42:12.024124 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.024134 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.024145 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.024155 | orchestrator | 2025-11-01 12:42:12.024166 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-11-01 12:42:12.024177 | orchestrator | 2025-11-01 12:42:12.024188 | orchestrator | TASK [Include hardening role] ************************************************** 2025-11-01 12:42:12.024199 | orchestrator | Saturday 01 November 2025 12:41:42 +0000 (0:00:01.621) 0:07:53.948 ***** 2025-11-01 12:42:12.024210 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:42:12.024220 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:42:12.024258 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:42:12.024269 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:42:12.024280 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:42:12.024291 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:42:12.024302 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:42:12.024312 | orchestrator | 2025-11-01 12:42:12.024323 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-11-01 12:42:12.024334 | orchestrator | 2025-11-01 12:42:12.024345 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-11-01 12:42:12.024355 | orchestrator | Saturday 01 November 2025 12:41:43 +0000 (0:00:00.591) 0:07:54.540 ***** 2025-11-01 12:42:12.024366 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.024377 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.024388 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.024399 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.024410 | orchestrator | changed: [testbed-node-4] 2025-11-01 
12:42:12.024420 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.024431 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.024442 | orchestrator | 2025-11-01 12:42:12.024453 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-11-01 12:42:12.024464 | orchestrator | Saturday 01 November 2025 12:41:44 +0000 (0:00:01.444) 0:07:55.984 ***** 2025-11-01 12:42:12.024475 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:12.024486 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:12.024497 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:12.024508 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:12.024518 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:12.024529 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:12.024540 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:12.024550 | orchestrator | 2025-11-01 12:42:12.024561 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-11-01 12:42:12.024589 | orchestrator | Saturday 01 November 2025 12:41:46 +0000 (0:00:01.621) 0:07:57.605 ***** 2025-11-01 12:42:12.024600 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:42:12.024611 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:42:12.024622 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:42:12.024632 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:42:12.024643 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:42:12.024655 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:42:12.024665 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:42:12.024676 | orchestrator | 2025-11-01 12:42:12.024687 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-11-01 12:42:12.024699 | orchestrator | Saturday 01 November 2025 12:41:47 +0000 (0:00:00.799) 0:07:58.405 ***** 2025-11-01 12:42:12.024710 | orchestrator | included: 
osism.services.smartd for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:42:12.024722 | orchestrator | 2025-11-01 12:42:12.024733 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-11-01 12:42:12.024743 | orchestrator | Saturday 01 November 2025 12:41:48 +0000 (0:00:00.948) 0:07:59.354 ***** 2025-11-01 12:42:12.024756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-11-01 12:42:12.024769 | orchestrator | 2025-11-01 12:42:12.024780 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-11-01 12:42:12.024791 | orchestrator | Saturday 01 November 2025 12:41:48 +0000 (0:00:00.911) 0:08:00.265 ***** 2025-11-01 12:42:12.024802 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.024813 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.024824 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.024835 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.024845 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.024856 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.024867 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.024878 | orchestrator | 2025-11-01 12:42:12.024909 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-11-01 12:42:12.024921 | orchestrator | Saturday 01 November 2025 12:41:59 +0000 (0:00:10.653) 0:08:10.918 ***** 2025-11-01 12:42:12.024932 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.024943 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.024953 | orchestrator | changed: [testbed-node-2] 2025-11-01 
12:42:12.024964 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.024975 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.024985 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.024996 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025007 | orchestrator | 2025-11-01 12:42:12.025018 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-11-01 12:42:12.025034 | orchestrator | Saturday 01 November 2025 12:42:00 +0000 (0:00:00.966) 0:08:11.885 ***** 2025-11-01 12:42:12.025045 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.025056 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.025067 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.025077 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.025088 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.025099 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.025109 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025120 | orchestrator | 2025-11-01 12:42:12.025131 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-11-01 12:42:12.025142 | orchestrator | Saturday 01 November 2025 12:42:01 +0000 (0:00:01.353) 0:08:13.238 ***** 2025-11-01 12:42:12.025153 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.025171 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.025181 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.025192 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.025203 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.025213 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.025241 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025252 | orchestrator | 2025-11-01 12:42:12.025263 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2025-11-01 12:42:12.025274 | orchestrator | Saturday 01 November 2025 12:42:04 +0000 (0:00:02.118) 0:08:15.356 ***** 2025-11-01 12:42:12.025285 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.025296 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.025307 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.025317 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.025328 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.025339 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.025350 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025360 | orchestrator | 2025-11-01 12:42:12.025371 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-11-01 12:42:12.025382 | orchestrator | Saturday 01 November 2025 12:42:05 +0000 (0:00:01.319) 0:08:16.676 ***** 2025-11-01 12:42:12.025393 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.025404 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.025415 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.025425 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.025436 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.025447 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.025458 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025468 | orchestrator | 2025-11-01 12:42:12.025479 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-11-01 12:42:12.025490 | orchestrator | 2025-11-01 12:42:12.025501 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-11-01 12:42:12.025512 | orchestrator | Saturday 01 November 2025 12:42:06 +0000 (0:00:01.247) 0:08:17.923 ***** 2025-11-01 12:42:12.025523 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager 2025-11-01 12:42:12.025534 | orchestrator | 2025-11-01 12:42:12.025545 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-01 12:42:12.025556 | orchestrator | Saturday 01 November 2025 12:42:07 +0000 (0:00:01.116) 0:08:19.040 ***** 2025-11-01 12:42:12.025567 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:12.025578 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:12.025589 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:12.025600 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:12.025611 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:12.025622 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:12.025633 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:12.025643 | orchestrator | 2025-11-01 12:42:12.025654 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-01 12:42:12.025665 | orchestrator | Saturday 01 November 2025 12:42:08 +0000 (0:00:00.915) 0:08:19.956 ***** 2025-11-01 12:42:12.025676 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:12.025687 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:12.025698 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:12.025709 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:12.025720 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:12.025730 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:12.025741 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:12.025752 | orchestrator | 2025-11-01 12:42:12.025763 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-11-01 12:42:12.025774 | orchestrator | Saturday 01 November 2025 12:42:09 +0000 (0:00:01.319) 0:08:21.275 ***** 2025-11-01 12:42:12.025785 | orchestrator | included: osism.commons.state for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager 2025-11-01 12:42:12.025803 | orchestrator | 2025-11-01 12:42:12.025814 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-11-01 12:42:12.025825 | orchestrator | Saturday 01 November 2025 12:42:11 +0000 (0:00:01.162) 0:08:22.437 ***** 2025-11-01 12:42:12.025836 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:12.025846 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:12.025857 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:12.025868 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:12.025879 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:12.025890 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:12.025900 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:12.025911 | orchestrator | 2025-11-01 12:42:12.025929 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-11-01 12:42:13.885599 | orchestrator | Saturday 01 November 2025 12:42:12 +0000 (0:00:00.882) 0:08:23.320 ***** 2025-11-01 12:42:13.885682 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:13.885696 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:13.885708 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:13.885719 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:13.885729 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:13.885740 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:13.885751 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:13.885762 | orchestrator | 2025-11-01 12:42:13.885774 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:42:13.885800 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2025-11-01 12:42:13.885813 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 
2025-11-01 12:42:13.885824 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-01 12:42:13.885835 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-11-01 12:42:13.885846 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-01 12:42:13.885856 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-01 12:42:13.885867 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-11-01 12:42:13.885878 | orchestrator | 2025-11-01 12:42:13.885889 | orchestrator | 2025-11-01 12:42:13.885900 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:42:13.885911 | orchestrator | Saturday 01 November 2025 12:42:13 +0000 (0:00:01.276) 0:08:24.596 ***** 2025-11-01 12:42:13.885922 | orchestrator | =============================================================================== 2025-11-01 12:42:13.885933 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.49s 2025-11-01 12:42:13.885943 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 37.13s 2025-11-01 12:42:13.885954 | orchestrator | osism.commons.packages : Download required packages -------------------- 20.52s 2025-11-01 12:42:13.885966 | orchestrator | osism.commons.repository : Update package cache ------------------------ 19.54s 2025-11-01 12:42:13.885976 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.80s 2025-11-01 12:42:13.885987 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.39s 2025-11-01 12:42:13.885999 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.27s 2025-11-01 12:42:13.886106 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.65s 2025-11-01 12:42:13.886120 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.13s 2025-11-01 12:42:13.886131 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.94s 2025-11-01 12:42:13.886142 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.54s 2025-11-01 12:42:13.886153 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.83s 2025-11-01 12:42:13.886165 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.82s 2025-11-01 12:42:13.886177 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.53s 2025-11-01 12:42:13.886189 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.11s 2025-11-01 12:42:13.886201 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.85s 2025-11-01 12:42:13.886213 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.57s 2025-11-01 12:42:13.886252 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.87s 2025-11-01 12:42:13.886266 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.20s 2025-11-01 12:42:13.886278 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.89s 2025-11-01 12:42:14.278567 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-11-01 12:42:14.278653 | orchestrator | + osism apply network 2025-11-01 12:42:27.449598 | orchestrator | 2025-11-01 12:42:27 | INFO  | Task 2f416ddb-307b-4ed1-a68f-62088ff0ebfd (network) was prepared for execution. 
2025-11-01 12:42:27.449700 | orchestrator | 2025-11-01 12:42:27 | INFO  | It takes a moment until task 2f416ddb-307b-4ed1-a68f-62088ff0ebfd (network) has been started and output is visible here. 2025-11-01 12:42:59.189317 | orchestrator | 2025-11-01 12:42:59.189429 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-11-01 12:42:59.189447 | orchestrator | 2025-11-01 12:42:59.189459 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-11-01 12:42:59.189472 | orchestrator | Saturday 01 November 2025 12:42:32 +0000 (0:00:00.309) 0:00:00.309 ***** 2025-11-01 12:42:59.189483 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.189495 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.189506 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.189517 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.189528 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.189539 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.189550 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.189561 | orchestrator | 2025-11-01 12:42:59.189572 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-11-01 12:42:59.189583 | orchestrator | Saturday 01 November 2025 12:42:33 +0000 (0:00:00.816) 0:00:01.126 ***** 2025-11-01 12:42:59.189613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:42:59.189627 | orchestrator | 2025-11-01 12:42:59.189638 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-11-01 12:42:59.189649 | orchestrator | Saturday 01 November 2025 12:42:34 +0000 (0:00:01.325) 0:00:02.451 ***** 2025-11-01 12:42:59.189660 | 
orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.189670 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.189681 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.189692 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.189703 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.189714 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.189724 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.189735 | orchestrator | 2025-11-01 12:42:59.189746 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-11-01 12:42:59.189778 | orchestrator | Saturday 01 November 2025 12:42:36 +0000 (0:00:02.025) 0:00:04.476 ***** 2025-11-01 12:42:59.189790 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.189800 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.189812 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.189825 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.189837 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.189850 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.189862 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.189874 | orchestrator | 2025-11-01 12:42:59.189886 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-11-01 12:42:59.189898 | orchestrator | Saturday 01 November 2025 12:42:38 +0000 (0:00:01.813) 0:00:06.290 ***** 2025-11-01 12:42:59.189911 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-11-01 12:42:59.189924 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-11-01 12:42:59.189937 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-11-01 12:42:59.189949 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-11-01 12:42:59.189961 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-11-01 12:42:59.189974 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-11-01 
12:42:59.189987 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-11-01 12:42:59.189999 | orchestrator | 2025-11-01 12:42:59.190011 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-11-01 12:42:59.190074 | orchestrator | Saturday 01 November 2025 12:42:39 +0000 (0:00:01.078) 0:00:07.368 ***** 2025-11-01 12:42:59.190087 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 12:42:59.190099 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 12:42:59.190110 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 12:42:59.190121 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 12:42:59.190132 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 12:42:59.190143 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 12:42:59.190154 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 12:42:59.190165 | orchestrator | 2025-11-01 12:42:59.190176 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-11-01 12:42:59.190188 | orchestrator | Saturday 01 November 2025 12:42:43 +0000 (0:00:03.956) 0:00:11.325 ***** 2025-11-01 12:42:59.190199 | orchestrator | changed: [testbed-manager] 2025-11-01 12:42:59.190210 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:59.190239 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:59.190250 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:59.190261 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:59.190272 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:59.190284 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:59.190295 | orchestrator | 2025-11-01 12:42:59.190306 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-11-01 12:42:59.190318 | orchestrator | Saturday 01 November 2025 12:42:45 +0000 (0:00:01.733) 0:00:13.058 ***** 2025-11-01 12:42:59.190329 
| orchestrator | ok: [testbed-node-1 -> localhost] 2025-11-01 12:42:59.190340 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 12:42:59.190352 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 12:42:59.190363 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-11-01 12:42:59.190374 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-11-01 12:42:59.190385 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-11-01 12:42:59.190396 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-11-01 12:42:59.190407 | orchestrator | 2025-11-01 12:42:59.190419 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-11-01 12:42:59.190430 | orchestrator | Saturday 01 November 2025 12:42:47 +0000 (0:00:01.935) 0:00:14.994 ***** 2025-11-01 12:42:59.190441 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.190453 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.190464 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.190484 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.190495 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.190505 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.190515 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.190526 | orchestrator | 2025-11-01 12:42:59.190536 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-11-01 12:42:59.190565 | orchestrator | Saturday 01 November 2025 12:42:48 +0000 (0:00:01.283) 0:00:16.277 ***** 2025-11-01 12:42:59.190578 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:42:59.190589 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:42:59.190600 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:42:59.190611 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:42:59.190622 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:42:59.190633 | orchestrator | skipping: [testbed-node-4] 2025-11-01 
12:42:59.190645 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:42:59.190656 | orchestrator | 2025-11-01 12:42:59.190667 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-11-01 12:42:59.190678 | orchestrator | Saturday 01 November 2025 12:42:49 +0000 (0:00:00.721) 0:00:16.998 ***** 2025-11-01 12:42:59.190690 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.190701 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.190712 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.190723 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.190734 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.190745 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.190756 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.190767 | orchestrator | 2025-11-01 12:42:59.190778 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-11-01 12:42:59.190790 | orchestrator | Saturday 01 November 2025 12:42:51 +0000 (0:00:02.276) 0:00:19.275 ***** 2025-11-01 12:42:59.190802 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:42:59.190813 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:42:59.190824 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:42:59.190835 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:42:59.190846 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:42:59.190857 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:42:59.190869 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-11-01 12:42:59.190881 | orchestrator | 2025-11-01 12:42:59.190893 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-11-01 12:42:59.190904 | orchestrator | Saturday 01 November 2025 12:42:52 +0000 (0:00:00.963) 0:00:20.239 ***** 2025-11-01 12:42:59.190915 | 
orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.190926 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:42:59.190937 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:42:59.190948 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:42:59.190959 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:42:59.190970 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:42:59.190981 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:42:59.190992 | orchestrator | 2025-11-01 12:42:59.191004 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-11-01 12:42:59.191015 | orchestrator | Saturday 01 November 2025 12:42:54 +0000 (0:00:01.772) 0:00:22.011 ***** 2025-11-01 12:42:59.191027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:42:59.191040 | orchestrator | 2025-11-01 12:42:59.191051 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-01 12:42:59.191063 | orchestrator | Saturday 01 November 2025 12:42:55 +0000 (0:00:01.454) 0:00:23.465 ***** 2025-11-01 12:42:59.191074 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.191085 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.191097 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.191114 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.191125 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.191137 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.191148 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.191159 | orchestrator | 2025-11-01 12:42:59.191170 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-11-01 12:42:59.191182 | orchestrator | Saturday 01 November 2025 
12:42:56 +0000 (0:00:01.038) 0:00:24.503 ***** 2025-11-01 12:42:59.191193 | orchestrator | ok: [testbed-manager] 2025-11-01 12:42:59.191204 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:42:59.191228 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:42:59.191241 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:42:59.191253 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:42:59.191264 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:42:59.191276 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:42:59.191288 | orchestrator | 2025-11-01 12:42:59.191300 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-01 12:42:59.191312 | orchestrator | Saturday 01 November 2025 12:42:57 +0000 (0:00:00.932) 0:00:25.435 ***** 2025-11-01 12:42:59.191324 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191335 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191347 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191359 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191371 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191382 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191394 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191406 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191417 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191429 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191449 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191461 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191473 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-11-01 12:42:59.191484 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-11-01 12:42:59.191496 | orchestrator | 2025-11-01 12:42:59.191515 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-11-01 12:43:18.001182 | orchestrator | Saturday 01 November 2025 12:42:59 +0000 (0:00:01.320) 0:00:26.756 ***** 2025-11-01 12:43:18.001339 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:43:18.001357 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:43:18.001368 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:43:18.001379 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:43:18.001390 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:43:18.001401 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:43:18.001412 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:43:18.001423 | orchestrator | 2025-11-01 12:43:18.001436 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-11-01 12:43:18.001448 | orchestrator | Saturday 01 November 2025 12:42:59 +0000 (0:00:00.663) 0:00:27.419 ***** 2025-11-01 12:43:18.001461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-3, testbed-node-1, testbed-node-5, testbed-node-2, testbed-node-4 2025-11-01 12:43:18.001475 | orchestrator | 2025-11-01 12:43:18.001504 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-11-01 12:43:18.001516 | orchestrator | Saturday 01 November 2025 12:43:04 +0000 (0:00:05.093) 0:00:32.513 
***** 2025-11-01 12:43:18.001549 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001599 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001729 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001750 | orchestrator | 2025-11-01 12:43:18.001762 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-11-01 12:43:18.001775 | orchestrator | Saturday 01 November 2025 12:43:11 +0000 (0:00:06.647) 0:00:39.161 ***** 2025-11-01 12:43:18.001793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001807 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 
2025-11-01 12:43:18.001857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-11-01 12:43:18.001880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001891 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', 
'192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:18.001943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:24.905004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-11-01 12:43:24.905116 | orchestrator | 2025-11-01 12:43:24.905133 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-11-01 12:43:24.905146 | orchestrator | Saturday 01 November 2025 12:43:17 +0000 (0:00:06.403) 0:00:45.564 ***** 2025-11-01 12:43:24.905174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:43:24.905187 | orchestrator | 2025-11-01 12:43:24.905198 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-11-01 12:43:24.905209 | orchestrator | Saturday 01 November 2025 12:43:19 +0000 (0:00:01.401) 0:00:46.966 ***** 2025-11-01 12:43:24.905269 | orchestrator | ok: [testbed-manager] 2025-11-01 12:43:24.905282 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:43:24.905292 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:43:24.905303 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:43:24.905314 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:43:24.905324 | orchestrator | ok: [testbed-node-4] 
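The "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above loop over one item per VXLAN interface, and each host's `dests` list is simply every testbed endpoint IP except its own `local_ip` (note the lists are in lexicographic string order, which is why `192.168.16.5` sorts last). A minimal sketch of that derivation, reconstructed from the loop items in this log (this is illustrative only, not the actual osism.commons.network implementation; `ENDPOINTS` and `vxlan_item` are hypothetical names):

```python
# Sketch (not the osism.commons.network code): rebuild the per-host VXLAN
# loop items seen in the task output above. Each host's 'dests' is every
# endpoint IP except its own 'local_ip', sorted as strings.

ENDPOINTS = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def vxlan_item(host: str, vni: int, mtu: int = 1350) -> dict:
    """Build the loop item for one host and one VXLAN (vni 42 = vxlan0, 23 = vxlan1)."""
    local_ip = ENDPOINTS[host]
    # Lexicographic sort reproduces the ordering in the log, where
    # '192.168.16.5' comes after '192.168.16.15'.
    dests = sorted(ip for ip in ENDPOINTS.values() if ip != local_ip)
    return {"local_ip": local_ip, "dests": dests, "mtu": mtu, "vni": vni}

item = vxlan_item("testbed-node-3", vni=42)
# Matches the testbed-node-3 / vxlan0 item in the log above.
```

Per the log, only the manager and the `vxlan1` interfaces additionally carry `addresses` (e.g. `192.168.112.5/20` on the manager's vxlan0 and `192.168.128.x/20` on each node's vxlan1); the sketch omits that field for brevity.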
2025-11-01 12:43:24.905335 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:43:24.905346 | orchestrator | 2025-11-01 12:43:24.905357 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-11-01 12:43:24.905368 | orchestrator | Saturday 01 November 2025 12:43:20 +0000 (0:00:01.247) 0:00:48.214 ***** 2025-11-01 12:43:24.905379 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905391 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905402 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905412 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905423 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:43:24.905435 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905445 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905456 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905467 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905478 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:43:24.905489 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905499 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905510 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905521 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905532 | orchestrator | skipping: 
[testbed-node-1] 2025-11-01 12:43:24.905545 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905558 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905570 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905583 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905595 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:43:24.905608 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905640 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905653 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905665 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905678 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905690 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905702 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905715 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905727 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:43:24.905740 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:43:24.905753 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-11-01 12:43:24.905765 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-11-01 12:43:24.905777 | orchestrator | skipping: [testbed-node-5] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-11-01 12:43:24.905789 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-11-01 12:43:24.905802 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:43:24.905814 | orchestrator | 2025-11-01 12:43:24.905826 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-11-01 12:43:24.905854 | orchestrator | Saturday 01 November 2025 12:43:22 +0000 (0:00:02.283) 0:00:50.498 ***** 2025-11-01 12:43:24.905867 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:43:24.905880 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:43:24.905891 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:43:24.905902 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:43:24.905913 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:43:24.905923 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:43:24.905934 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:43:24.905944 | orchestrator | 2025-11-01 12:43:24.905955 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-11-01 12:43:24.905966 | orchestrator | Saturday 01 November 2025 12:43:23 +0000 (0:00:00.707) 0:00:51.205 ***** 2025-11-01 12:43:24.905977 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:43:24.905987 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:43:24.905998 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:43:24.906009 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:43:24.906075 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:43:24.906096 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:43:24.906107 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:43:24.906118 | orchestrator | 2025-11-01 12:43:24.906128 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 
12:43:24.906140 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 12:43:24.906153 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906164 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906175 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906186 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906205 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906238 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 12:43:24.906249 | orchestrator | 2025-11-01 12:43:24.906260 | orchestrator | 2025-11-01 12:43:24.906271 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:43:24.906282 | orchestrator | Saturday 01 November 2025 12:43:24 +0000 (0:00:00.825) 0:00:52.030 ***** 2025-11-01 12:43:24.906293 | orchestrator | =============================================================================== 2025-11-01 12:43:24.906303 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.65s 2025-11-01 12:43:24.906314 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.40s 2025-11-01 12:43:24.906325 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.09s 2025-11-01 12:43:24.906336 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.96s 2025-11-01 12:43:24.906346 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.28s 
2025-11-01 12:43:24.906357 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.28s 2025-11-01 12:43:24.906367 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s 2025-11-01 12:43:24.906378 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2025-11-01 12:43:24.906389 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s 2025-11-01 12:43:24.906400 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.77s 2025-11-01 12:43:24.906410 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.73s 2025-11-01 12:43:24.906421 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.45s 2025-11-01 12:43:24.906431 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.40s 2025-11-01 12:43:24.906442 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.33s 2025-11-01 12:43:24.906453 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s 2025-11-01 12:43:24.906464 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.28s 2025-11-01 12:43:24.906475 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.25s 2025-11-01 12:43:24.906485 | orchestrator | osism.commons.network : Create required directories --------------------- 1.08s 2025-11-01 12:43:24.906496 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.04s 2025-11-01 12:43:24.906507 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.96s 2025-11-01 12:43:25.289963 | orchestrator | + osism apply wireguard 2025-11-01 12:43:37.668913 | orchestrator | 2025-11-01 12:43:37 | INFO  
| Task c7633533-81f5-49e8-bf9b-8a6d69fe37df (wireguard) was prepared for execution. 2025-11-01 12:43:37.669025 | orchestrator | 2025-11-01 12:43:37 | INFO  | It takes a moment until task c7633533-81f5-49e8-bf9b-8a6d69fe37df (wireguard) has been started and output is visible here. 2025-11-01 12:44:00.335320 | orchestrator | 2025-11-01 12:44:00.335423 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-11-01 12:44:00.335439 | orchestrator | 2025-11-01 12:44:00.335451 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-11-01 12:44:00.335462 | orchestrator | Saturday 01 November 2025 12:43:42 +0000 (0:00:00.253) 0:00:00.253 ***** 2025-11-01 12:44:00.335473 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:00.335486 | orchestrator | 2025-11-01 12:44:00.335497 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-11-01 12:44:00.335508 | orchestrator | Saturday 01 November 2025 12:43:44 +0000 (0:00:01.751) 0:00:02.005 ***** 2025-11-01 12:44:00.335542 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335554 | orchestrator | 2025-11-01 12:44:00.335565 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-11-01 12:44:00.335576 | orchestrator | Saturday 01 November 2025 12:43:52 +0000 (0:00:07.724) 0:00:09.729 ***** 2025-11-01 12:44:00.335603 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335614 | orchestrator | 2025-11-01 12:44:00.335625 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-11-01 12:44:00.335636 | orchestrator | Saturday 01 November 2025 12:43:52 +0000 (0:00:00.619) 0:00:10.349 ***** 2025-11-01 12:44:00.335646 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335657 | orchestrator | 2025-11-01 12:44:00.335667 | orchestrator | TASK [osism.services.wireguard : Get 
preshared key] **************************** 2025-11-01 12:44:00.335678 | orchestrator | Saturday 01 November 2025 12:43:53 +0000 (0:00:00.435) 0:00:10.784 ***** 2025-11-01 12:44:00.335689 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:00.335701 | orchestrator | 2025-11-01 12:44:00.335712 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-11-01 12:44:00.335722 | orchestrator | Saturday 01 November 2025 12:43:53 +0000 (0:00:00.729) 0:00:11.514 ***** 2025-11-01 12:44:00.335733 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:00.335744 | orchestrator | 2025-11-01 12:44:00.335754 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-11-01 12:44:00.335765 | orchestrator | Saturday 01 November 2025 12:43:54 +0000 (0:00:00.413) 0:00:11.927 ***** 2025-11-01 12:44:00.335776 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:00.335786 | orchestrator | 2025-11-01 12:44:00.335797 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-11-01 12:44:00.335808 | orchestrator | Saturday 01 November 2025 12:43:54 +0000 (0:00:00.453) 0:00:12.381 ***** 2025-11-01 12:44:00.335819 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335829 | orchestrator | 2025-11-01 12:44:00.335840 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-11-01 12:44:00.335851 | orchestrator | Saturday 01 November 2025 12:43:56 +0000 (0:00:01.265) 0:00:13.646 ***** 2025-11-01 12:44:00.335861 | orchestrator | changed: [testbed-manager] => (item=None) 2025-11-01 12:44:00.335872 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335883 | orchestrator | 2025-11-01 12:44:00.335894 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-11-01 12:44:00.335905 | orchestrator | Saturday 01 November 2025 12:43:57 +0000 (0:00:00.959) 
0:00:14.606 ***** 2025-11-01 12:44:00.335916 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335926 | orchestrator | 2025-11-01 12:44:00.335937 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-11-01 12:44:00.335948 | orchestrator | Saturday 01 November 2025 12:43:58 +0000 (0:00:01.820) 0:00:16.427 ***** 2025-11-01 12:44:00.335959 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:00.335969 | orchestrator | 2025-11-01 12:44:00.335980 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:44:00.335991 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:44:00.336002 | orchestrator | 2025-11-01 12:44:00.336013 | orchestrator | 2025-11-01 12:44:00.336024 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:44:00.336035 | orchestrator | Saturday 01 November 2025 12:43:59 +0000 (0:00:01.036) 0:00:17.463 ***** 2025-11-01 12:44:00.336045 | orchestrator | =============================================================================== 2025-11-01 12:44:00.336056 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.72s 2025-11-01 12:44:00.336067 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s 2025-11-01 12:44:00.336077 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s 2025-11-01 12:44:00.336088 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s 2025-11-01 12:44:00.336106 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.04s 2025-11-01 12:44:00.336117 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s 2025-11-01 12:44:00.336128 | orchestrator | 
osism.services.wireguard : Get preshared key ---------------------------- 0.73s 2025-11-01 12:44:00.336138 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.62s 2025-11-01 12:44:00.336149 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-11-01 12:44:00.336160 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-11-01 12:44:00.336170 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2025-11-01 12:44:00.716358 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-11-01 12:44:00.758730 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-11-01 12:44:00.758768 | orchestrator | Dload Upload Total Spent Left Speed 2025-11-01 12:44:00.839466 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 185 0 --:--:-- --:--:-- --:--:-- 187 2025-11-01 12:44:00.853516 | orchestrator | + osism apply --environment custom workarounds 2025-11-01 12:44:03.012445 | orchestrator | 2025-11-01 12:44:03 | INFO  | Trying to run play workarounds in environment custom 2025-11-01 12:44:13.159880 | orchestrator | 2025-11-01 12:44:13 | INFO  | Task cbcab65a-70c4-496e-b180-de8f6b131888 (workarounds) was prepared for execution. 2025-11-01 12:44:13.159979 | orchestrator | 2025-11-01 12:44:13 | INFO  | It takes a moment until task cbcab65a-70c4-496e-b180-de8f6b131888 (workarounds) has been started and output is visible here. 
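The wireguard play above generates a server keypair and a preshared key, renders `wg0.conf`, copies per-client configuration files, and enables `wg-quick@wg0.service`. For orientation, a minimal `wg0.conf` of the kind such a role typically renders looks like the following; all keys, addresses, and the port below are placeholders, not values from this deployment:

```ini
[Interface]
# Server private key, as produced by `wg genkey` (placeholder).
PrivateKey = <server-private-key>
# Tunnel address of the server (hypothetical value).
Address = 192.168.96.1/24
ListenPort = 51820

[Peer]
# Client public key and the shared `wg genpsk` output (placeholders).
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
# Tunnel address assigned to this client (hypothetical value).
AllowedIPs = 192.168.96.2/32
```

`wg-quick@wg0` reads this file from `/etc/wireguard/wg0.conf`; the matching client config carries the same `PresharedKey` plus the server's public key and endpoint, which is what the subsequent `prepare-wireguard-configuration.sh` step packages up.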
2025-11-01 12:44:40.986365 | orchestrator | 2025-11-01 12:44:40.986465 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 12:44:40.986482 | orchestrator | 2025-11-01 12:44:40.986495 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-11-01 12:44:40.986516 | orchestrator | Saturday 01 November 2025 12:44:18 +0000 (0:00:00.159) 0:00:00.159 ***** 2025-11-01 12:44:40.986528 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986539 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986550 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986560 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986571 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986582 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986592 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-11-01 12:44:40.986603 | orchestrator | 2025-11-01 12:44:40.986614 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-11-01 12:44:40.986624 | orchestrator | 2025-11-01 12:44:40.986635 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-11-01 12:44:40.986645 | orchestrator | Saturday 01 November 2025 12:44:19 +0000 (0:00:00.924) 0:00:01.084 ***** 2025-11-01 12:44:40.986656 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:40.986668 | orchestrator | 2025-11-01 12:44:40.986679 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-11-01 12:44:40.986689 | orchestrator | 2025-11-01 12:44:40.986700 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-11-01 12:44:40.986711 | orchestrator | Saturday 01 November 2025 12:44:21 +0000 (0:00:02.592) 0:00:03.676 ***** 2025-11-01 12:44:40.986721 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:44:40.986732 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:44:40.986743 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:44:40.986753 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:44:40.986783 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:44:40.986794 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:44:40.986804 | orchestrator | 2025-11-01 12:44:40.986815 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-11-01 12:44:40.986825 | orchestrator | 2025-11-01 12:44:40.986837 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-11-01 12:44:40.986848 | orchestrator | Saturday 01 November 2025 12:44:23 +0000 (0:00:02.013) 0:00:05.690 ***** 2025-11-01 12:44:40.986859 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986871 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986882 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986893 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986904 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986916 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-11-01 12:44:40.986928 | orchestrator | 2025-11-01 12:44:40.986940 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-11-01 12:44:40.986953 | orchestrator | Saturday 01 November 2025 12:44:25 +0000 (0:00:01.624) 0:00:07.315 ***** 2025-11-01 12:44:40.986966 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:44:40.986978 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:44:40.986990 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:44:40.987002 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:44:40.987014 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:44:40.987025 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:44:40.987037 | orchestrator | 2025-11-01 12:44:40.987049 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-11-01 12:44:40.987061 | orchestrator | Saturday 01 November 2025 12:44:29 +0000 (0:00:03.947) 0:00:11.262 ***** 2025-11-01 12:44:40.987074 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:44:40.987086 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:44:40.987097 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:44:40.987110 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:44:40.987121 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:44:40.987134 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:44:40.987145 | orchestrator | 2025-11-01 12:44:40.987157 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-11-01 12:44:40.987169 | orchestrator | 2025-11-01 12:44:40.987182 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-11-01 12:44:40.987195 | orchestrator | Saturday 01 November 2025 12:44:29 +0000 (0:00:00.766) 0:00:12.028 ***** 2025-11-01 12:44:40.987230 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:44:40.987242 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:44:40.987254 | orchestrator | changed: [testbed-node-2] 2025-11-01 
12:44:40.987266 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:44:40.987277 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:44:40.987287 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:44:40.987298 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:40.987308 | orchestrator | 2025-11-01 12:44:40.987319 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-11-01 12:44:40.987330 | orchestrator | Saturday 01 November 2025 12:44:31 +0000 (0:00:01.718) 0:00:13.747 ***** 2025-11-01 12:44:40.987340 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:44:40.987351 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:44:40.987362 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:44:40.987372 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:44:40.987383 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:44:40.987400 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:44:40.987427 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:40.987439 | orchestrator | 2025-11-01 12:44:40.987450 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-11-01 12:44:40.987461 | orchestrator | Saturday 01 November 2025 12:44:33 +0000 (0:00:01.752) 0:00:15.500 ***** 2025-11-01 12:44:40.987476 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:44:40.987487 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:44:40.987498 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:44:40.987509 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:44:40.987520 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:44:40.987530 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:44:40.987541 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:40.987552 | orchestrator | 2025-11-01 12:44:40.987562 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-11-01 12:44:40.987573 | orchestrator 
| Saturday 01 November 2025 12:44:35 +0000 (0:00:01.719) 0:00:17.220 ***** 2025-11-01 12:44:40.987584 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:44:40.987595 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:44:40.987605 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:44:40.987616 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:44:40.987627 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:44:40.987637 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:44:40.987648 | orchestrator | changed: [testbed-manager] 2025-11-01 12:44:40.987658 | orchestrator | 2025-11-01 12:44:40.987669 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-11-01 12:44:40.987680 | orchestrator | Saturday 01 November 2025 12:44:37 +0000 (0:00:01.970) 0:00:19.190 ***** 2025-11-01 12:44:40.987691 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:44:40.987708 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:44:40.987720 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:44:40.987730 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:44:40.987741 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:44:40.987751 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:44:40.987762 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:44:40.987773 | orchestrator | 2025-11-01 12:44:40.987783 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-11-01 12:44:40.987794 | orchestrator | 2025-11-01 12:44:40.987805 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-11-01 12:44:40.987815 | orchestrator | Saturday 01 November 2025 12:44:37 +0000 (0:00:00.748) 0:00:19.938 ***** 2025-11-01 12:44:40.987826 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:44:40.987837 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:44:40.987847 | orchestrator | ok: [testbed-node-0] 
2025-11-01 12:44:40.987858 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:44:40.987868 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:44:40.987879 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:44:40.987890 | orchestrator | ok: [testbed-manager] 2025-11-01 12:44:40.987900 | orchestrator | 2025-11-01 12:44:40.987911 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:44:40.987923 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:44:40.987935 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.987946 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.987957 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.987968 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.987984 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.987995 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:44:40.988006 | orchestrator | 2025-11-01 12:44:40.988017 | orchestrator | 2025-11-01 12:44:40.988027 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:44:40.988038 | orchestrator | Saturday 01 November 2025 12:44:40 +0000 (0:00:03.076) 0:00:23.014 ***** 2025-11-01 12:44:40.988049 | orchestrator | =============================================================================== 2025-11-01 12:44:40.988060 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.95s 2025-11-01 12:44:40.988070 | orchestrator | Install python3-docker 
-------------------------------------------------- 3.08s 2025-11-01 12:44:40.988081 | orchestrator | Apply netplan configuration --------------------------------------------- 2.59s 2025-11-01 12:44:40.988091 | orchestrator | Apply netplan configuration --------------------------------------------- 2.01s 2025-11-01 12:44:40.988102 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.97s 2025-11-01 12:44:40.988113 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.75s 2025-11-01 12:44:40.988123 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.72s 2025-11-01 12:44:40.988134 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-11-01 12:44:40.988144 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.62s 2025-11-01 12:44:40.988155 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.92s 2025-11-01 12:44:40.988166 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s 2025-11-01 12:44:40.988183 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.75s 2025-11-01 12:44:41.819453 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-11-01 12:44:54.035689 | orchestrator | 2025-11-01 12:44:54 | INFO  | Task 69f99662-77d9-4014-931f-f05bacb8c035 (reboot) was prepared for execution. 2025-11-01 12:44:54.035790 | orchestrator | 2025-11-01 12:44:54 | INFO  | It takes a moment until task 69f99662-77d9-4014-931f-f05bacb8c035 (reboot) has been started and output is visible here. 
2025-11-01 12:45:05.136144 | orchestrator | 2025-11-01 12:45:05.136281 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.136299 | orchestrator | 2025-11-01 12:45:05.136311 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.136322 | orchestrator | Saturday 01 November 2025 12:44:58 +0000 (0:00:00.252) 0:00:00.252 ***** 2025-11-01 12:45:05.136333 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:45:05.136345 | orchestrator | 2025-11-01 12:45:05.136356 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.136367 | orchestrator | Saturday 01 November 2025 12:44:58 +0000 (0:00:00.112) 0:00:00.365 ***** 2025-11-01 12:45:05.136378 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:45:05.136389 | orchestrator | 2025-11-01 12:45:05.136400 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 12:45:05.136411 | orchestrator | Saturday 01 November 2025 12:44:59 +0000 (0:00:00.989) 0:00:01.354 ***** 2025-11-01 12:45:05.136422 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:45:05.136433 | orchestrator | 2025-11-01 12:45:05.136444 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.136455 | orchestrator | 2025-11-01 12:45:05.136466 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.136477 | orchestrator | Saturday 01 November 2025 12:45:00 +0000 (0:00:00.137) 0:00:01.491 ***** 2025-11-01 12:45:05.136488 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:45:05.136519 | orchestrator | 2025-11-01 12:45:05.136531 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.136542 | orchestrator | Saturday 01 November 
2025 12:45:00 +0000 (0:00:00.147) 0:00:01.639 ***** 2025-11-01 12:45:05.136553 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:45:05.136564 | orchestrator | 2025-11-01 12:45:05.136575 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 12:45:05.136586 | orchestrator | Saturday 01 November 2025 12:45:00 +0000 (0:00:00.672) 0:00:02.312 ***** 2025-11-01 12:45:05.136597 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:45:05.136607 | orchestrator | 2025-11-01 12:45:05.136618 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.136629 | orchestrator | 2025-11-01 12:45:05.136640 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.136651 | orchestrator | Saturday 01 November 2025 12:45:01 +0000 (0:00:00.108) 0:00:02.420 ***** 2025-11-01 12:45:05.136662 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:45:05.136672 | orchestrator | 2025-11-01 12:45:05.136683 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.136694 | orchestrator | Saturday 01 November 2025 12:45:01 +0000 (0:00:00.252) 0:00:02.672 ***** 2025-11-01 12:45:05.136705 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:45:05.136716 | orchestrator | 2025-11-01 12:45:05.136726 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 12:45:05.136737 | orchestrator | Saturday 01 November 2025 12:45:01 +0000 (0:00:00.684) 0:00:03.356 ***** 2025-11-01 12:45:05.136748 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:45:05.136759 | orchestrator | 2025-11-01 12:45:05.136769 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.136780 | orchestrator | 2025-11-01 12:45:05.136791 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.136801 | orchestrator | Saturday 01 November 2025 12:45:02 +0000 (0:00:00.114) 0:00:03.471 ***** 2025-11-01 12:45:05.136812 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:45:05.136823 | orchestrator | 2025-11-01 12:45:05.136834 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.136846 | orchestrator | Saturday 01 November 2025 12:45:02 +0000 (0:00:00.099) 0:00:03.570 ***** 2025-11-01 12:45:05.136857 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:45:05.136868 | orchestrator | 2025-11-01 12:45:05.136878 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 12:45:05.136889 | orchestrator | Saturday 01 November 2025 12:45:02 +0000 (0:00:00.670) 0:00:04.241 ***** 2025-11-01 12:45:05.136900 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:45:05.136911 | orchestrator | 2025-11-01 12:45:05.136922 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.136933 | orchestrator | 2025-11-01 12:45:05.136943 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.136954 | orchestrator | Saturday 01 November 2025 12:45:02 +0000 (0:00:00.121) 0:00:04.363 ***** 2025-11-01 12:45:05.136965 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:45:05.136976 | orchestrator | 2025-11-01 12:45:05.136987 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.136998 | orchestrator | Saturday 01 November 2025 12:45:03 +0000 (0:00:00.123) 0:00:04.487 ***** 2025-11-01 12:45:05.137009 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:45:05.137019 | orchestrator | 2025-11-01 12:45:05.137030 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-11-01 12:45:05.137041 | orchestrator | Saturday 01 November 2025 12:45:03 +0000 (0:00:00.674) 0:00:05.161 ***** 2025-11-01 12:45:05.137052 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:45:05.137063 | orchestrator | 2025-11-01 12:45:05.137073 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-11-01 12:45:05.137091 | orchestrator | 2025-11-01 12:45:05.137102 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-11-01 12:45:05.137113 | orchestrator | Saturday 01 November 2025 12:45:03 +0000 (0:00:00.131) 0:00:05.293 ***** 2025-11-01 12:45:05.137124 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:45:05.137135 | orchestrator | 2025-11-01 12:45:05.137145 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-11-01 12:45:05.137156 | orchestrator | Saturday 01 November 2025 12:45:04 +0000 (0:00:00.128) 0:00:05.421 ***** 2025-11-01 12:45:05.137181 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:45:05.137192 | orchestrator | 2025-11-01 12:45:05.137220 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-11-01 12:45:05.137232 | orchestrator | Saturday 01 November 2025 12:45:04 +0000 (0:00:00.656) 0:00:06.078 ***** 2025-11-01 12:45:05.137258 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:45:05.137270 | orchestrator | 2025-11-01 12:45:05.137281 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:45:05.137293 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:45:05.137304 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:45:05.137315 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-11-01 12:45:05.137326 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:45:05.137337 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:45:05.137348 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:45:05.137359 | orchestrator | 2025-11-01 12:45:05.137370 | orchestrator | 2025-11-01 12:45:05.137381 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:45:05.137392 | orchestrator | Saturday 01 November 2025 12:45:04 +0000 (0:00:00.041) 0:00:06.119 ***** 2025-11-01 12:45:05.137403 | orchestrator | =============================================================================== 2025-11-01 12:45:05.137414 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s 2025-11-01 12:45:05.137425 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s 2025-11-01 12:45:05.137436 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2025-11-01 12:45:05.534656 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-11-01 12:45:17.854111 | orchestrator | 2025-11-01 12:45:17 | INFO  | Task 9dcac492-a6ec-4e07-9c13-8e167dfd0285 (wait-for-connection) was prepared for execution. 2025-11-01 12:45:17.854266 | orchestrator | 2025-11-01 12:45:17 | INFO  | It takes a moment until task 9dcac492-a6ec-4e07-9c13-8e167dfd0285 (wait-for-connection) has been started and output is visible here. 
2025-11-01 12:45:34.850721 | orchestrator | 2025-11-01 12:45:34.850829 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-11-01 12:45:34.850845 | orchestrator | 2025-11-01 12:45:34.850856 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-11-01 12:45:34.850867 | orchestrator | Saturday 01 November 2025 12:45:22 +0000 (0:00:00.291) 0:00:00.291 ***** 2025-11-01 12:45:34.850877 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:45:34.850888 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:45:34.850898 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:45:34.850908 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:45:34.850917 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:45:34.850949 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:45:34.850960 | orchestrator | 2025-11-01 12:45:34.850969 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:45:34.850980 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.850991 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.851001 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.851011 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.851020 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.851030 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:45:34.851039 | orchestrator | 2025-11-01 12:45:34.851049 | orchestrator | 2025-11-01 12:45:34.851058 | orchestrator | TASKS RECAP 
******************************************************************** 2025-11-01 12:45:34.851068 | orchestrator | Saturday 01 November 2025 12:45:34 +0000 (0:00:11.638) 0:00:11.929 ***** 2025-11-01 12:45:34.851078 | orchestrator | =============================================================================== 2025-11-01 12:45:34.851087 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.64s 2025-11-01 12:45:35.202788 | orchestrator | + osism apply hddtemp 2025-11-01 12:45:47.667647 | orchestrator | 2025-11-01 12:45:47 | INFO  | Task 64df327f-dd5f-4579-98ad-783da11ae057 (hddtemp) was prepared for execution. 2025-11-01 12:45:47.667751 | orchestrator | 2025-11-01 12:45:47 | INFO  | It takes a moment until task 64df327f-dd5f-4579-98ad-783da11ae057 (hddtemp) has been started and output is visible here. 2025-11-01 12:46:18.194007 | orchestrator | 2025-11-01 12:46:18.194140 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-11-01 12:46:18.194154 | orchestrator | 2025-11-01 12:46:18.194164 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-11-01 12:46:18.194175 | orchestrator | Saturday 01 November 2025 12:45:52 +0000 (0:00:00.287) 0:00:00.287 ***** 2025-11-01 12:46:18.194185 | orchestrator | ok: [testbed-manager] 2025-11-01 12:46:18.194242 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:46:18.194253 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:46:18.194262 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:46:18.194272 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:46:18.194282 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:46:18.194291 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:46:18.194301 | orchestrator | 2025-11-01 12:46:18.194311 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-11-01 12:46:18.194321 | orchestrator | Saturday 01 November 2025 
12:45:53 +0000 (0:00:00.832) 0:00:01.119 ***** 2025-11-01 12:46:18.194332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:46:18.194344 | orchestrator | 2025-11-01 12:46:18.194354 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-11-01 12:46:18.194364 | orchestrator | Saturday 01 November 2025 12:45:54 +0000 (0:00:01.327) 0:00:02.446 ***** 2025-11-01 12:46:18.194374 | orchestrator | ok: [testbed-manager] 2025-11-01 12:46:18.194383 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:46:18.194393 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:46:18.194403 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:46:18.194412 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:46:18.194422 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:46:18.194453 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:46:18.194463 | orchestrator | 2025-11-01 12:46:18.194473 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-11-01 12:46:18.194483 | orchestrator | Saturday 01 November 2025 12:45:56 +0000 (0:00:02.202) 0:00:04.649 ***** 2025-11-01 12:46:18.194492 | orchestrator | changed: [testbed-manager] 2025-11-01 12:46:18.194502 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:46:18.194512 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:46:18.194522 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:46:18.194531 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:46:18.194541 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:46:18.194550 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:46:18.194559 | orchestrator | 2025-11-01 12:46:18.194569 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-11-01 12:46:18.194581 | orchestrator | Saturday 01 November 2025 12:45:58 +0000 (0:00:01.240) 0:00:05.889 ***** 2025-11-01 12:46:18.194592 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:46:18.194603 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:46:18.194614 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:46:18.194625 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:46:18.194635 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:46:18.194646 | orchestrator | ok: [testbed-manager] 2025-11-01 12:46:18.194658 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:46:18.194669 | orchestrator | 2025-11-01 12:46:18.194680 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-11-01 12:46:18.194691 | orchestrator | Saturday 01 November 2025 12:45:59 +0000 (0:00:01.219) 0:00:07.109 ***** 2025-11-01 12:46:18.194701 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:46:18.194713 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:46:18.194724 | orchestrator | changed: [testbed-manager] 2025-11-01 12:46:18.194734 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:46:18.194745 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:46:18.194756 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:46:18.194767 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:46:18.194778 | orchestrator | 2025-11-01 12:46:18.194789 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-11-01 12:46:18.194799 | orchestrator | Saturday 01 November 2025 12:46:00 +0000 (0:00:00.936) 0:00:08.045 ***** 2025-11-01 12:46:18.194810 | orchestrator | changed: [testbed-manager] 2025-11-01 12:46:18.194820 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:46:18.194831 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:46:18.194842 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:46:18.194853 | orchestrator | changed: 
[testbed-node-2] 2025-11-01 12:46:18.194863 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:46:18.194874 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:46:18.194885 | orchestrator | 2025-11-01 12:46:18.194896 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-11-01 12:46:18.194906 | orchestrator | Saturday 01 November 2025 12:46:14 +0000 (0:00:13.855) 0:00:21.901 ***** 2025-11-01 12:46:18.194918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:46:18.194930 | orchestrator | 2025-11-01 12:46:18.194941 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-11-01 12:46:18.194952 | orchestrator | Saturday 01 November 2025 12:46:15 +0000 (0:00:01.485) 0:00:23.386 ***** 2025-11-01 12:46:18.194961 | orchestrator | changed: [testbed-manager] 2025-11-01 12:46:18.194970 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:46:18.194980 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:46:18.194989 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:46:18.194999 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:46:18.195008 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:46:18.195024 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:46:18.195034 | orchestrator | 2025-11-01 12:46:18.195043 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:46:18.195065 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:46:18.195092 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195102 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195112 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195122 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195132 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195142 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:46:18.195151 | orchestrator | 2025-11-01 12:46:18.195161 | orchestrator | 2025-11-01 12:46:18.195171 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:46:18.195181 | orchestrator | Saturday 01 November 2025 12:46:17 +0000 (0:00:02.056) 0:00:25.443 ***** 2025-11-01 12:46:18.195208 | orchestrator | =============================================================================== 2025-11-01 12:46:18.195219 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.86s 2025-11-01 12:46:18.195228 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.20s 2025-11-01 12:46:18.195238 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.06s 2025-11-01 12:46:18.195247 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.49s 2025-11-01 12:46:18.195257 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2025-11-01 12:46:18.195267 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.24s 2025-11-01 12:46:18.195276 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2025-11-01 12:46:18.195286 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.94s 2025-11-01 12:46:18.195296 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.83s 2025-11-01 12:46:18.609460 | orchestrator | ++ semver latest 7.1.1 2025-11-01 12:46:18.651305 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 12:46:18.651341 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 12:46:18.651352 | orchestrator | + sudo systemctl restart manager.service 2025-11-01 12:46:32.086410 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-01 12:46:32.086485 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-11-01 12:46:32.086499 | orchestrator | + local max_attempts=60 2025-11-01 12:46:32.086510 | orchestrator | + local name=ceph-ansible 2025-11-01 12:46:32.086521 | orchestrator | + local attempt_num=1 2025-11-01 12:46:32.086533 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:32.125537 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:32.125593 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:32.125606 | orchestrator | + sleep 5 2025-11-01 12:46:37.130565 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:37.228032 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:37.228072 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:37.228084 | orchestrator | + sleep 5 2025-11-01 12:46:42.234577 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:42.261350 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:42.261461 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:42.261476 | orchestrator | + sleep 5 2025-11-01 12:46:47.264669 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:47.296975 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:47.297012 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:47.297021 | orchestrator | + sleep 5 2025-11-01 12:46:52.301630 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:52.341979 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:52.342083 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:52.342098 | orchestrator | + sleep 5 2025-11-01 12:46:57.347873 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:46:57.385353 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:46:57.385422 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:46:57.385437 | orchestrator | + sleep 5 2025-11-01 12:47:02.389829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:02.428867 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:02.428921 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:02.428936 | orchestrator | + sleep 5 2025-11-01 12:47:07.433281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:07.504536 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:07.504623 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:07.504637 | orchestrator | + sleep 5 2025-11-01 12:47:12.508228 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:12.664342 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:12.664427 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:12.664441 | orchestrator | + sleep 5 2025-11-01 12:47:17.666932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:17.705115 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-11-01 12:47:17.705149 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:17.705161 | orchestrator | + sleep 5 2025-11-01 12:47:22.710536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:22.746050 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:22.746083 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:22.746095 | orchestrator | + sleep 5 2025-11-01 12:47:27.752571 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:27.785036 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:27.785084 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:27.785111 | orchestrator | + sleep 5 2025-11-01 12:47:32.789074 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:32.828896 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:32.828953 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-11-01 12:47:32.828966 | orchestrator | + sleep 5 2025-11-01 12:47:37.832774 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-11-01 12:47:37.875542 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:37.875590 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-11-01 12:47:37.875603 | orchestrator | + local max_attempts=60 2025-11-01 12:47:37.875615 | orchestrator | + local name=kolla-ansible 2025-11-01 12:47:37.875626 | orchestrator | + local attempt_num=1 2025-11-01 12:47:37.876873 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-11-01 12:47:37.922771 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:37.922794 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-11-01 12:47:37.922805 | orchestrator | + local max_attempts=60 2025-11-01 
12:47:37.922816 | orchestrator | + local name=osism-ansible 2025-11-01 12:47:37.922827 | orchestrator | + local attempt_num=1 2025-11-01 12:47:37.923142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-11-01 12:47:37.960490 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-11-01 12:47:37.960515 | orchestrator | + [[ true == \t\r\u\e ]] 2025-11-01 12:47:37.960527 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-11-01 12:47:38.114774 | orchestrator | ARA in ceph-ansible already disabled. 2025-11-01 12:47:38.253847 | orchestrator | ARA in kolla-ansible already disabled. 2025-11-01 12:47:38.405225 | orchestrator | ARA in osism-ansible already disabled. 2025-11-01 12:47:38.559811 | orchestrator | ARA in osism-kubernetes already disabled. 2025-11-01 12:47:38.561308 | orchestrator | + osism apply gather-facts 2025-11-01 12:47:51.288777 | orchestrator | 2025-11-01 12:47:51 | INFO  | Task f627b10e-f793-4075-af85-bdded7f6d576 (gather-facts) was prepared for execution. 2025-11-01 12:47:51.288872 | orchestrator | 2025-11-01 12:47:51 | INFO  | It takes a moment until task f627b10e-f793-4075-af85-bdded7f6d576 (gather-facts) has been started and output is visible here. 
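The xtrace between 12:46:32 and 12:47:37 above is a polling loop waiting on each container's Docker health check. A minimal sketch of that loop, reconstructed from the trace (the real helper lives in the testbed's deploy scripts; the `DOCKER` and `SLEEP_SECONDS` overrides are added here only so the sketch can run without a container runtime):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of wait_for_container_healthy from the xtrace above.
# DOCKER and SLEEP_SECONDS are test hooks added here; they are not part of
# the original trace, which calls /usr/bin/docker and sleeps 5s directly.
DOCKER=${DOCKER:-/usr/bin/docker}
SLEEP_SECONDS=${SLEEP_SECONDS:-5}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Give up after max_attempts polls, mirroring the traced guard.
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep "$SLEEP_SECONDS"
    done
}
```

In the job above, the loop polls `ceph-ansible` every five seconds through its `unhealthy` and `starting` phases until the health check passes at 12:47:37, then falls through immediately for the already-healthy `kolla-ansible` and `osism-ansible` containers.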
2025-11-01 12:48:05.853549 | orchestrator | 2025-11-01 12:48:05.853653 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 12:48:05.853671 | orchestrator | 2025-11-01 12:48:05.853683 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 12:48:05.853695 | orchestrator | Saturday 01 November 2025 12:47:56 +0000 (0:00:00.235) 0:00:00.235 ***** 2025-11-01 12:48:05.853706 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:48:05.853718 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:48:05.853729 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:48:05.853740 | orchestrator | ok: [testbed-manager] 2025-11-01 12:48:05.853751 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:48:05.853761 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:48:05.853772 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:48:05.853783 | orchestrator | 2025-11-01 12:48:05.853794 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 12:48:05.853805 | orchestrator | 2025-11-01 12:48:05.853816 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 12:48:05.853827 | orchestrator | Saturday 01 November 2025 12:48:04 +0000 (0:00:08.697) 0:00:08.933 ***** 2025-11-01 12:48:05.853838 | orchestrator | skipping: [testbed-manager] 2025-11-01 12:48:05.853849 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:48:05.853860 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:48:05.853871 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:48:05.853882 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:48:05.853893 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:48:05.853903 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:48:05.853914 | orchestrator | 2025-11-01 12:48:05.853925 | orchestrator | PLAY RECAP 
********************************************************************* 2025-11-01 12:48:05.853937 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.853949 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.853960 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.853971 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.853982 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.853993 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.854004 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 12:48:05.854062 | orchestrator | 2025-11-01 12:48:05.854075 | orchestrator | 2025-11-01 12:48:05.854087 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:48:05.854098 | orchestrator | Saturday 01 November 2025 12:48:05 +0000 (0:00:00.630) 0:00:09.563 ***** 2025-11-01 12:48:05.854109 | orchestrator | =============================================================================== 2025-11-01 12:48:05.854120 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.70s 2025-11-01 12:48:05.854156 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2025-11-01 12:48:06.216855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-11-01 12:48:06.236776 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-11-01 12:48:06.249564 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-11-01 12:48:06.263526 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-11-01 12:48:06.277693 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-11-01 12:48:06.292822 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-11-01 12:48:06.307758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-11-01 12:48:06.329292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-11-01 12:48:06.345963 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-11-01 12:48:06.360049 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-11-01 12:48:06.375001 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-11-01 12:48:06.390082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-11-01 12:48:06.407919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-11-01 12:48:06.427837 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-11-01 12:48:06.446434 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-11-01 12:48:06.470269 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-11-01 12:48:06.489514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-11-01 12:48:06.508675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-11-01 12:48:06.521270 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-11-01 12:48:06.533660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-11-01 12:48:06.556903 | orchestrator | + [[ false == \t\r\u\e ]] 2025-11-01 12:48:06.695289 | orchestrator | ok: Runtime: 0:24:34.556642 2025-11-01 12:48:06.803659 | 2025-11-01 12:48:06.803814 | TASK [Deploy services] 2025-11-01 12:48:07.335759 | orchestrator | skipping: Conditional result was False 2025-11-01 12:48:07.353310 | 2025-11-01 12:48:07.353473 | TASK [Deploy in a nutshell] 2025-11-01 12:48:08.022324 | orchestrator | + set -e 2025-11-01 12:48:08.022474 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 12:48:08.023263 | orchestrator | 2025-11-01 12:48:08.023285 | orchestrator | # PULL IMAGES 2025-11-01 12:48:08.023300 | orchestrator | 2025-11-01 12:48:08.023322 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 12:48:08.023337 | orchestrator | ++ INTERACTIVE=false 2025-11-01 12:48:08.023380 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 12:48:08.023401 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 12:48:08.023416 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 12:48:08.023430 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 12:48:08.023449 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 12:48:08.023463 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 12:48:08.023481 | orchestrator | ++ 
CEPH_VERSION=reef 2025-11-01 12:48:08.023494 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 12:48:08.023514 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 12:48:08.023527 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 12:48:08.023542 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 12:48:08.023553 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 12:48:08.023568 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 12:48:08.023579 | orchestrator | ++ export ARA=false 2025-11-01 12:48:08.023590 | orchestrator | ++ ARA=false 2025-11-01 12:48:08.023601 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 12:48:08.023612 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 12:48:08.023623 | orchestrator | ++ export TEMPEST=false 2025-11-01 12:48:08.023633 | orchestrator | ++ TEMPEST=false 2025-11-01 12:48:08.023644 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 12:48:08.023654 | orchestrator | ++ IS_ZUUL=true 2025-11-01 12:48:08.023665 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:48:08.023677 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 12:48:08.023687 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 12:48:08.023698 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 12:48:08.023708 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 12:48:08.023719 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 12:48:08.023730 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 12:48:08.023741 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-11-01 12:48:08.023751 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 12:48:08.023762 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 12:48:08.023773 | orchestrator | + echo 2025-11-01 12:48:08.023790 | orchestrator | + echo '# PULL IMAGES' 2025-11-01 12:48:08.023801 | orchestrator | + echo 2025-11-01 12:48:08.024229 | orchestrator | ++ semver latest 7.0.0 2025-11-01 
12:48:08.075910 | orchestrator | + [[ -1 -ge 0 ]] 2025-11-01 12:48:08.075940 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-11-01 12:48:08.075952 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-11-01 12:48:10.285800 | orchestrator | 2025-11-01 12:48:10 | INFO  | Trying to run play pull-images in environment custom 2025-11-01 12:48:20.403359 | orchestrator | 2025-11-01 12:48:20 | INFO  | Task ca22642a-3ac3-4ab6-9f68-72eeeb33376b (pull-images) was prepared for execution. 2025-11-01 12:48:20.403417 | orchestrator | 2025-11-01 12:48:20 | INFO  | Task ca22642a-3ac3-4ab6-9f68-72eeeb33376b is running in background. No more output. Check ARA for logs. 2025-11-01 12:48:23.061109 | orchestrator | 2025-11-01 12:48:23 | INFO  | Trying to run play wipe-partitions in environment custom 2025-11-01 12:48:33.168717 | orchestrator | 2025-11-01 12:48:33 | INFO  | Task 7ef0148a-c03f-4346-bc6f-abd73f2b022e (wipe-partitions) was prepared for execution. 2025-11-01 12:48:33.168821 | orchestrator | 2025-11-01 12:48:33 | INFO  | It takes a moment until task 7ef0148a-c03f-4346-bc6f-abd73f2b022e (wipe-partitions) has been started and output is visible here. 
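The `++ semver latest 7.0.0` / `+ [[ -1 -ge 0 ]]` / `+ [[ latest == \l\a\t\e\s\t ]]` sequence above is a version gate: the step runs when the manager version compares at or above the minimum, with the literal tag `latest` always passing as a fallback. A hedged sketch of that gate, reconstructed from the xtrace (the function name is illustrative, and `SEMVER` is a test hook; the job itself calls a `semver` CLI that prints a negative, zero, or positive comparison result, as the trace's `-1` shows):

```shell
#!/usr/bin/env bash
# Sketch of the version gate traced above (a reconstruction, not the
# testbed's actual helper). SEMVER is an override hook for testing;
# the job calls the `semver` CLI, which prints e.g. -1, 0, or 1.
SEMVER=${SEMVER:-semver}

manager_version_at_least() {
    local version=$1 minimum=$2
    # Numeric comparison first: the xtrace shows `semver latest 7.0.0`
    # printing -1, so `[[ -1 -ge 0 ]]` fails for the "latest" tag...
    if [[ "$("$SEMVER" "$version" "$minimum")" -ge 0 ]]; then
        return 0
    fi
    # ...and the script falls through to an explicit check for "latest".
    [[ "$version" == latest ]]
}
```

With `MANAGER_VERSION=latest`, as in this job, the gate passes via the fallback branch and `osism apply --no-wait -r 2 -e custom pull-images` is executed.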
2025-11-01 12:48:46.798994 | orchestrator | 2025-11-01 12:48:46.799102 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-11-01 12:48:46.799117 | orchestrator | 2025-11-01 12:48:46.799127 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-11-01 12:48:46.799142 | orchestrator | Saturday 01 November 2025 12:48:38 +0000 (0:00:00.173) 0:00:00.173 ***** 2025-11-01 12:48:46.799152 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:48:46.799162 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:48:46.799172 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:48:46.799229 | orchestrator | 2025-11-01 12:48:46.799240 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-11-01 12:48:46.799274 | orchestrator | Saturday 01 November 2025 12:48:38 +0000 (0:00:00.622) 0:00:00.796 ***** 2025-11-01 12:48:46.799284 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:48:46.799294 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:48:46.799308 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:48:46.799317 | orchestrator | 2025-11-01 12:48:46.799328 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-11-01 12:48:46.799337 | orchestrator | Saturday 01 November 2025 12:48:39 +0000 (0:00:00.451) 0:00:01.248 ***** 2025-11-01 12:48:46.799347 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:48:46.799358 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:48:46.799367 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:48:46.799376 | orchestrator | 2025-11-01 12:48:46.799386 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-11-01 12:48:46.799395 | orchestrator | Saturday 01 November 2025 12:48:39 +0000 (0:00:00.595) 0:00:01.844 ***** 2025-11-01 12:48:46.799405 | orchestrator | skipping: 
[testbed-node-3] 2025-11-01 12:48:46.799415 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:48:46.799424 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:48:46.799433 | orchestrator | 2025-11-01 12:48:46.799443 | orchestrator | TASK [Check device availability] *********************************************** 2025-11-01 12:48:46.799452 | orchestrator | Saturday 01 November 2025 12:48:40 +0000 (0:00:00.301) 0:00:02.145 ***** 2025-11-01 12:48:46.799462 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 12:48:46.799475 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 12:48:46.799485 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 12:48:46.799494 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 12:48:46.799503 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 12:48:46.799513 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-01 12:48:46.799522 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-01 12:48:46.799532 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 12:48:46.799541 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 12:48:46.799550 | orchestrator | 2025-11-01 12:48:46.799560 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-11-01 12:48:46.799570 | orchestrator | Saturday 01 November 2025 12:48:41 +0000 (0:00:01.222) 0:00:03.367 ***** 2025-11-01 12:48:46.799580 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 12:48:46.799590 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 12:48:46.799599 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 12:48:46.799608 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 12:48:46.799618 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 12:48:46.799627 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-11-01 12:48:46.799636 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-11-01 12:48:46.799646 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 12:48:46.799655 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 12:48:46.799665 | orchestrator | 2025-11-01 12:48:46.799674 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-11-01 12:48:46.799684 | orchestrator | Saturday 01 November 2025 12:48:42 +0000 (0:00:01.590) 0:00:04.957 ***** 2025-11-01 12:48:46.799693 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-11-01 12:48:46.799703 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-11-01 12:48:46.799712 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-11-01 12:48:46.799721 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-11-01 12:48:46.799731 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-11-01 12:48:46.799740 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-11-01 12:48:46.799749 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-11-01 12:48:46.799766 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-11-01 12:48:46.799781 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-11-01 12:48:46.799791 | orchestrator | 2025-11-01 12:48:46.799801 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-11-01 12:48:46.799810 | orchestrator | Saturday 01 November 2025 12:48:45 +0000 (0:00:02.110) 0:00:07.068 ***** 2025-11-01 12:48:46.799820 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:48:46.799829 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:48:46.799839 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:48:46.799848 | orchestrator | 2025-11-01 12:48:46.799858 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-11-01 12:48:46.799867 | orchestrator | Saturday 01 November 2025 12:48:45 +0000 (0:00:00.615) 0:00:07.683 ***** 2025-11-01 12:48:46.799877 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:48:46.799886 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:48:46.799896 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:48:46.799905 | orchestrator | 2025-11-01 12:48:46.799914 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:48:46.799926 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:48:46.799937 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:48:46.799963 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 12:48:46.799973 | orchestrator | 2025-11-01 12:48:46.799983 | orchestrator | 2025-11-01 12:48:46.799993 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:48:46.800003 | orchestrator | Saturday 01 November 2025 12:48:46 +0000 (0:00:00.657) 0:00:08.340 ***** 2025-11-01 12:48:46.800012 | orchestrator | =============================================================================== 2025-11-01 12:48:46.800022 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.11s 2025-11-01 12:48:46.800031 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2025-11-01 12:48:46.800041 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2025-11-01 12:48:46.800050 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2025-11-01 12:48:46.800060 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.62s
2025-11-01 12:48:46.800069 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2025-11-01 12:48:46.800079 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2025-11-01 12:48:46.800088 | orchestrator | Remove all rook related logical devices --------------------------------- 0.45s
2025-11-01 12:48:46.800098 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s
2025-11-01 12:48:59.584463 | orchestrator | 2025-11-01 12:48:59 | INFO  | Task e109b6ff-bca4-4e48-b17e-74525e64c520 (facts) was prepared for execution.
2025-11-01 12:48:59.584550 | orchestrator | 2025-11-01 12:48:59 | INFO  | It takes a moment until task e109b6ff-bca4-4e48-b17e-74525e64c520 (facts) has been started and output is visible here.
2025-11-01 12:49:12.861010 | orchestrator |
2025-11-01 12:49:12.861109 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-11-01 12:49:12.861126 | orchestrator |
2025-11-01 12:49:12.861139 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-11-01 12:49:12.861151 | orchestrator | Saturday 01 November 2025 12:49:04 +0000 (0:00:00.282) 0:00:00.282 *****
2025-11-01 12:49:12.861162 | orchestrator | ok: [testbed-manager]
2025-11-01 12:49:12.861174 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:49:12.861228 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:49:12.861264 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:49:12.861275 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:12.861286 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:49:12.861296 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:49:12.861307 | orchestrator |
2025-11-01 12:49:12.861318 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-11-01 12:49:12.861329 | orchestrator | Saturday 01 November 2025 12:49:05 +0000 (0:00:01.148) 0:00:01.431 *****
2025-11-01 12:49:12.861339 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:49:12.861351 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:49:12.861361 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:49:12.861372 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:49:12.861382 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:12.861393 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:12.861404 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:49:12.861414 | orchestrator |
2025-11-01 12:49:12.861425 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-01 12:49:12.861435 | orchestrator |
2025-11-01 12:49:12.861460 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-01 12:49:12.861471 | orchestrator | Saturday 01 November 2025 12:49:06 +0000 (0:00:01.376) 0:00:02.807 *****
2025-11-01 12:49:12.861482 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:49:12.861493 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:49:12.861505 | orchestrator | ok: [testbed-manager]
2025-11-01 12:49:12.861516 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:49:12.861526 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:12.861537 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:49:12.861548 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:49:12.861558 | orchestrator |
2025-11-01 12:49:12.861571 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-11-01 12:49:12.861584 | orchestrator |
2025-11-01 12:49:12.861596 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-11-01 12:49:12.861609 | orchestrator | Saturday 01 November 2025 12:49:11 +0000 (0:00:05.042) 0:00:07.849 *****
2025-11-01 12:49:12.861621 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:49:12.861634 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:49:12.861645 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:49:12.861658 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:49:12.861670 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:12.861682 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:12.861694 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:49:12.861706 | orchestrator |
2025-11-01 12:49:12.861718 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:49:12.861731 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861746 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861758 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861771 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861783 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861796 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861808 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:49:12.861821 | orchestrator |
2025-11-01 12:49:12.861841 | orchestrator |
2025-11-01 12:49:12.861854 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:49:12.861867 | orchestrator | Saturday 01 November 2025 12:49:12 +0000 (0:00:00.591) 0:00:08.441 *****
2025-11-01 12:49:12.861879 | orchestrator | ===============================================================================
2025-11-01 12:49:12.861891 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.04s
2025-11-01 12:49:12.861904 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s
2025-11-01 12:49:12.861917 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s
2025-11-01 12:49:12.861928 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2025-11-01 12:49:15.603476 | orchestrator | 2025-11-01 12:49:15 | INFO  | Task 870df899-01b5-4b39-955a-ec757cf7590f (ceph-configure-lvm-volumes) was prepared for execution.
2025-11-01 12:49:15.603566 | orchestrator | 2025-11-01 12:49:15 | INFO  | It takes a moment until task 870df899-01b5-4b39-955a-ec757cf7590f (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-11-01 12:49:29.150600 | orchestrator |
2025-11-01 12:49:29.150691 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-01 12:49:29.150703 | orchestrator |
2025-11-01 12:49:29.150711 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-01 12:49:29.150720 | orchestrator | Saturday 01 November 2025 12:49:20 +0000 (0:00:00.402) 0:00:00.402 *****
2025-11-01 12:49:29.150729 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 12:49:29.150737 | orchestrator |
2025-11-01 12:49:29.150745 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-01 12:49:29.150753 | orchestrator | Saturday 01 November 2025 12:49:21 +0000 (0:00:00.260) 0:00:00.663 *****
2025-11-01 12:49:29.150760 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:29.150769 | orchestrator |
2025-11-01 12:49:29.150777 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.150785 | orchestrator | Saturday 01 November 2025 12:49:21 +0000 (0:00:00.270) 0:00:00.934 *****
2025-11-01 12:49:29.150793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-11-01 12:49:29.150801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-11-01 12:49:29.150809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-11-01 12:49:29.150825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-11-01 12:49:29.150833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-11-01 12:49:29.150841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-11-01 12:49:29.150849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-11-01 12:49:29.150856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-11-01 12:49:29.150864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-11-01 12:49:29.150872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-11-01 12:49:29.150880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-11-01 12:49:29.150887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-11-01 12:49:29.150895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-11-01 12:49:29.150903 | orchestrator |
2025-11-01 12:49:29.150910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.150918 | orchestrator | Saturday 01 November 2025 12:49:21 +0000 (0:00:00.531) 0:00:01.466 *****
2025-11-01 12:49:29.150926 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.150954 | orchestrator |
2025-11-01 12:49:29.150962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.150970 | orchestrator | Saturday 01 November 2025 12:49:22 +0000 (0:00:00.204) 0:00:01.670 *****
2025-11-01 12:49:29.150977 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.150985 | orchestrator |
2025-11-01 12:49:29.150993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151001 | orchestrator | Saturday 01 November 2025 12:49:22 +0000 (0:00:00.213) 0:00:01.884 *****
2025-11-01 12:49:29.151008 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151016 | orchestrator |
2025-11-01 12:49:29.151024 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151032 | orchestrator | Saturday 01 November 2025 12:49:22 +0000 (0:00:00.208) 0:00:02.093 *****
2025-11-01 12:49:29.151040 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151050 | orchestrator |
2025-11-01 12:49:29.151058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151066 | orchestrator | Saturday 01 November 2025 12:49:22 +0000 (0:00:00.198) 0:00:02.291 *****
2025-11-01 12:49:29.151073 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151081 | orchestrator |
2025-11-01 12:49:29.151089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151097 | orchestrator | Saturday 01 November 2025 12:49:22 +0000 (0:00:00.226) 0:00:02.518 *****
2025-11-01 12:49:29.151105 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151113 | orchestrator |
2025-11-01 12:49:29.151121 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151128 | orchestrator | Saturday 01 November 2025 12:49:23 +0000 (0:00:00.226) 0:00:02.745 *****
2025-11-01 12:49:29.151136 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151144 | orchestrator |
2025-11-01 12:49:29.151153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151162 | orchestrator | Saturday 01 November 2025 12:49:23 +0000 (0:00:00.225) 0:00:02.971 *****
2025-11-01 12:49:29.151171 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151210 | orchestrator |
2025-11-01 12:49:29.151220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151229 | orchestrator | Saturday 01 November 2025 12:49:23 +0000 (0:00:00.202) 0:00:03.173 *****
2025-11-01 12:49:29.151237 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11)
2025-11-01 12:49:29.151248 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11)
2025-11-01 12:49:29.151256 | orchestrator |
2025-11-01 12:49:29.151265 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151274 | orchestrator | Saturday 01 November 2025 12:49:24 +0000 (0:00:00.433) 0:00:03.607 *****
2025-11-01 12:49:29.151296 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826)
2025-11-01 12:49:29.151306 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826)
2025-11-01 12:49:29.151315 | orchestrator |
2025-11-01 12:49:29.151324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151333 | orchestrator | Saturday 01 November 2025 12:49:24 +0000 (0:00:00.717) 0:00:04.324 *****
2025-11-01 12:49:29.151346 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef)
2025-11-01 12:49:29.151355 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef)
2025-11-01 12:49:29.151365 | orchestrator |
2025-11-01 12:49:29.151374 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151383 | orchestrator | Saturday 01 November 2025 12:49:25 +0000 (0:00:00.742) 0:00:05.067 *****
2025-11-01 12:49:29.151392 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f)
2025-11-01 12:49:29.151407 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f)
2025-11-01 12:49:29.151416 | orchestrator |
2025-11-01 12:49:29.151424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:29.151433 | orchestrator | Saturday 01 November 2025 12:49:26 +0000 (0:00:01.035) 0:00:06.102 *****
2025-11-01 12:49:29.151442 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-01 12:49:29.151451 | orchestrator |
2025-11-01 12:49:29.151460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151469 | orchestrator | Saturday 01 November 2025 12:49:26 +0000 (0:00:00.354) 0:00:06.457 *****
2025-11-01 12:49:29.151477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-11-01 12:49:29.151486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-11-01 12:49:29.151495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-11-01 12:49:29.151504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-11-01 12:49:29.151512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-11-01 12:49:29.151519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-11-01 12:49:29.151527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-11-01 12:49:29.151535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-11-01 12:49:29.151542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-11-01 12:49:29.151550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-11-01 12:49:29.151558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-11-01 12:49:29.151566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-11-01 12:49:29.151573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-11-01 12:49:29.151581 | orchestrator |
2025-11-01 12:49:29.151589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151597 | orchestrator | Saturday 01 November 2025 12:49:27 +0000 (0:00:00.415) 0:00:06.872 *****
2025-11-01 12:49:29.151605 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151613 | orchestrator |
2025-11-01 12:49:29.151620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151628 | orchestrator | Saturday 01 November 2025 12:49:27 +0000 (0:00:00.225) 0:00:07.097 *****
2025-11-01 12:49:29.151636 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151644 | orchestrator |
2025-11-01 12:49:29.151651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151659 | orchestrator | Saturday 01 November 2025 12:49:27 +0000 (0:00:00.223) 0:00:07.321 *****
2025-11-01 12:49:29.151667 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151675 | orchestrator |
2025-11-01 12:49:29.151682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151690 | orchestrator | Saturday 01 November 2025 12:49:28 +0000 (0:00:00.219) 0:00:07.541 *****
2025-11-01 12:49:29.151698 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151706 | orchestrator |
2025-11-01 12:49:29.151714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151721 | orchestrator | Saturday 01 November 2025 12:49:28 +0000 (0:00:00.210) 0:00:07.752 *****
2025-11-01 12:49:29.151729 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151737 | orchestrator |
2025-11-01 12:49:29.151750 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151758 | orchestrator | Saturday 01 November 2025 12:49:28 +0000 (0:00:00.231) 0:00:07.983 *****
2025-11-01 12:49:29.151765 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151773 | orchestrator |
2025-11-01 12:49:29.151781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151789 | orchestrator | Saturday 01 November 2025 12:49:28 +0000 (0:00:00.250) 0:00:08.233 *****
2025-11-01 12:49:29.151796 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:29.151804 | orchestrator |
2025-11-01 12:49:29.151812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:29.151820 | orchestrator | Saturday 01 November 2025 12:49:28 +0000 (0:00:00.221) 0:00:08.455 *****
2025-11-01 12:49:29.151832 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341416 | orchestrator |
2025-11-01 12:49:37.341514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:37.341529 | orchestrator | Saturday 01 November 2025 12:49:29 +0000 (0:00:00.211) 0:00:08.667 *****
2025-11-01 12:49:37.341540 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-11-01 12:49:37.341551 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-11-01 12:49:37.341561 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-11-01 12:49:37.341571 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-11-01 12:49:37.341581 | orchestrator |
2025-11-01 12:49:37.341591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:37.341601 | orchestrator | Saturday 01 November 2025 12:49:30 +0000 (0:00:01.222) 0:00:09.889 *****
2025-11-01 12:49:37.341626 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341636 | orchestrator |
2025-11-01 12:49:37.341646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:37.341656 | orchestrator | Saturday 01 November 2025 12:49:30 +0000 (0:00:00.248) 0:00:10.138 *****
2025-11-01 12:49:37.341666 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341675 | orchestrator |
2025-11-01 12:49:37.341685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:37.341695 | orchestrator | Saturday 01 November 2025 12:49:30 +0000 (0:00:00.242) 0:00:10.380 *****
2025-11-01 12:49:37.341705 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341714 | orchestrator |
2025-11-01 12:49:37.341724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:37.341734 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.209) 0:00:10.590 *****
2025-11-01 12:49:37.341744 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341753 | orchestrator |
2025-11-01 12:49:37.341763 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-11-01 12:49:37.341773 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.199) 0:00:10.789 *****
2025-11-01 12:49:37.341782 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-11-01 12:49:37.341792 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-11-01 12:49:37.341801 | orchestrator |
2025-11-01 12:49:37.341811 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-11-01 12:49:37.341821 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.197) 0:00:10.986 *****
2025-11-01 12:49:37.341830 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341840 | orchestrator |
2025-11-01 12:49:37.341850 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-11-01 12:49:37.341859 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.141) 0:00:11.127 *****
2025-11-01 12:49:37.341869 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341878 | orchestrator |
2025-11-01 12:49:37.341888 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-11-01 12:49:37.341898 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.142) 0:00:11.270 *****
2025-11-01 12:49:37.341908 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.341936 | orchestrator |
2025-11-01 12:49:37.341946 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-11-01 12:49:37.341955 | orchestrator | Saturday 01 November 2025 12:49:31 +0000 (0:00:00.134) 0:00:11.404 *****
2025-11-01 12:49:37.341967 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:37.341978 | orchestrator |
2025-11-01 12:49:37.341989 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-11-01 12:49:37.342000 | orchestrator | Saturday 01 November 2025 12:49:32 +0000 (0:00:00.125) 0:00:11.529 *****
2025-11-01 12:49:37.342012 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd83d2135-3529-5759-9738-6f5d85bcdaef'}})
2025-11-01 12:49:37.342068 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2d34deeb-c147-51f6-865b-40ba131b62ad'}})
2025-11-01 12:49:37.342079 | orchestrator |
2025-11-01 12:49:37.342090 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-11-01 12:49:37.342100 | orchestrator | Saturday 01 November 2025 12:49:32 +0000 (0:00:00.165) 0:00:11.695 *****
2025-11-01 12:49:37.342112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd83d2135-3529-5759-9738-6f5d85bcdaef'}})
2025-11-01 12:49:37.342131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2d34deeb-c147-51f6-865b-40ba131b62ad'}})
2025-11-01 12:49:37.342143 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342154 | orchestrator |
2025-11-01 12:49:37.342165 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-11-01 12:49:37.342175 | orchestrator | Saturday 01 November 2025 12:49:32 +0000 (0:00:00.167) 0:00:11.863 *****
2025-11-01 12:49:37.342207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd83d2135-3529-5759-9738-6f5d85bcdaef'}})
2025-11-01 12:49:37.342218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2d34deeb-c147-51f6-865b-40ba131b62ad'}})
2025-11-01 12:49:37.342229 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342240 | orchestrator |
2025-11-01 12:49:37.342251 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-11-01 12:49:37.342262 | orchestrator | Saturday 01 November 2025 12:49:32 +0000 (0:00:00.397) 0:00:12.260 *****
2025-11-01 12:49:37.342272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd83d2135-3529-5759-9738-6f5d85bcdaef'}})
2025-11-01 12:49:37.342283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2d34deeb-c147-51f6-865b-40ba131b62ad'}})
2025-11-01 12:49:37.342294 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342304 | orchestrator |
2025-11-01 12:49:37.342330 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-11-01 12:49:37.342340 | orchestrator | Saturday 01 November 2025 12:49:32 +0000 (0:00:00.191) 0:00:12.452 *****
2025-11-01 12:49:37.342349 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:37.342359 | orchestrator |
2025-11-01 12:49:37.342369 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-11-01 12:49:37.342378 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.179) 0:00:12.631 *****
2025-11-01 12:49:37.342388 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:49:37.342397 | orchestrator |
2025-11-01 12:49:37.342406 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-11-01 12:49:37.342416 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.153) 0:00:12.785 *****
2025-11-01 12:49:37.342425 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342435 | orchestrator |
2025-11-01 12:49:37.342444 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-11-01 12:49:37.342454 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.153) 0:00:12.939 *****
2025-11-01 12:49:37.342464 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342473 | orchestrator |
2025-11-01 12:49:37.342490 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-11-01 12:49:37.342500 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.143) 0:00:13.082 *****
2025-11-01 12:49:37.342509 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342519 | orchestrator |
2025-11-01 12:49:37.342528 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-11-01 12:49:37.342538 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.141) 0:00:13.224 *****
2025-11-01 12:49:37.342548 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:49:37.342557 | orchestrator |     "ceph_osd_devices": {
2025-11-01 12:49:37.342567 | orchestrator |         "sdb": {
2025-11-01 12:49:37.342577 | orchestrator |             "osd_lvm_uuid": "d83d2135-3529-5759-9738-6f5d85bcdaef"
2025-11-01 12:49:37.342587 | orchestrator |         },
2025-11-01 12:49:37.342596 | orchestrator |         "sdc": {
2025-11-01 12:49:37.342606 | orchestrator |             "osd_lvm_uuid": "2d34deeb-c147-51f6-865b-40ba131b62ad"
2025-11-01 12:49:37.342615 | orchestrator |         }
2025-11-01 12:49:37.342625 | orchestrator |     }
2025-11-01 12:49:37.342635 | orchestrator | }
2025-11-01 12:49:37.342644 | orchestrator |
2025-11-01 12:49:37.342654 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-11-01 12:49:37.342663 | orchestrator | Saturday 01 November 2025 12:49:33 +0000 (0:00:00.167) 0:00:13.391 *****
2025-11-01 12:49:37.342673 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342682 | orchestrator |
2025-11-01 12:49:37.342692 | orchestrator | TASK [Print DB devices] ********************************************************
2025-11-01 12:49:37.342701 | orchestrator | Saturday 01 November 2025 12:49:34 +0000 (0:00:00.193) 0:00:13.585 *****
2025-11-01 12:49:37.342716 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342726 | orchestrator |
2025-11-01 12:49:37.342736 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-11-01 12:49:37.342745 | orchestrator | Saturday 01 November 2025 12:49:34 +0000 (0:00:00.129) 0:00:13.714 *****
2025-11-01 12:49:37.342755 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:49:37.342764 | orchestrator |
2025-11-01 12:49:37.342774 | orchestrator | TASK [Print configuration data] ************************************************
2025-11-01 12:49:37.342783 | orchestrator | Saturday 01 November 2025 12:49:34 +0000 (0:00:00.155) 0:00:13.870 *****
2025-11-01 12:49:37.342793 | orchestrator | changed: [testbed-node-3] => {
2025-11-01 12:49:37.342802 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-11-01 12:49:37.342812 | orchestrator |         "ceph_osd_devices": {
2025-11-01 12:49:37.342821 | orchestrator |             "sdb": {
2025-11-01 12:49:37.342831 | orchestrator |                 "osd_lvm_uuid": "d83d2135-3529-5759-9738-6f5d85bcdaef"
2025-11-01 12:49:37.342841 | orchestrator |             },
2025-11-01 12:49:37.342850 | orchestrator |             "sdc": {
2025-11-01 12:49:37.342860 | orchestrator |                 "osd_lvm_uuid": "2d34deeb-c147-51f6-865b-40ba131b62ad"
2025-11-01 12:49:37.342869 | orchestrator |             }
2025-11-01 12:49:37.342879 | orchestrator |         },
2025-11-01 12:49:37.342888 | orchestrator |         "lvm_volumes": [
2025-11-01 12:49:37.342898 | orchestrator |             {
2025-11-01 12:49:37.342907 | orchestrator |                 "data": "osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef",
2025-11-01 12:49:37.342917 | orchestrator |                 "data_vg": "ceph-d83d2135-3529-5759-9738-6f5d85bcdaef"
2025-11-01 12:49:37.342927 | orchestrator |             },
2025-11-01 12:49:37.342936 | orchestrator |             {
2025-11-01 12:49:37.342946 | orchestrator |                 "data": "osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad",
2025-11-01 12:49:37.342955 | orchestrator |                 "data_vg": "ceph-2d34deeb-c147-51f6-865b-40ba131b62ad"
2025-11-01 12:49:37.342965 | orchestrator |             }
2025-11-01 12:49:37.342974 | orchestrator |         ]
2025-11-01 12:49:37.342983 | orchestrator |     }
2025-11-01 12:49:37.342993 | orchestrator | }
2025-11-01 12:49:37.343003 | orchestrator |
2025-11-01 12:49:37.343012 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-11-01 12:49:37.343027 | orchestrator | Saturday 01 November 2025 12:49:34 +0000 (0:00:00.447) 0:00:14.317 *****
2025-11-01 12:49:37.343037 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 12:49:37.343046 | orchestrator |
2025-11-01 12:49:37.343056 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-11-01 12:49:37.343065 | orchestrator |
2025-11-01 12:49:37.343075 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-01 12:49:37.343084 | orchestrator | Saturday 01 November 2025 12:49:36 +0000 (0:00:01.998) 0:00:16.316 *****
2025-11-01 12:49:37.343094 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-11-01 12:49:37.343103 | orchestrator |
2025-11-01 12:49:37.343113 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-01 12:49:37.343122 | orchestrator | Saturday 01 November 2025 12:49:37 +0000 (0:00:00.271) 0:00:16.587 *****
2025-11-01 12:49:37.343132 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:49:37.343141 | orchestrator |
2025-11-01 12:49:37.343151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:37.343166 | orchestrator | Saturday 01 November 2025 12:49:37 +0000 (0:00:00.273) 0:00:16.861 *****
2025-11-01 12:49:46.938460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-11-01 12:49:46.938569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-11-01 12:49:46.938584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-11-01 12:49:46.938596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-11-01 12:49:46.938607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-11-01 12:49:46.938618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-11-01 12:49:46.938628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-11-01 12:49:46.938639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-11-01 12:49:46.938650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-11-01 12:49:46.938661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-11-01 12:49:46.938692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-11-01 12:49:46.938704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-11-01 12:49:46.938715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-11-01 12:49:46.938730 | orchestrator |
2025-11-01 12:49:46.938743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.938756 | orchestrator | Saturday 01 November 2025 12:49:37 +0000 (0:00:00.456) 0:00:17.317 *****
2025-11-01 12:49:46.938767 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.938779 | orchestrator |
2025-11-01 12:49:46.938790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.938801 | orchestrator | Saturday 01 November 2025 12:49:38 +0000 (0:00:00.228) 0:00:17.546 *****
2025-11-01 12:49:46.938812 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.938823 | orchestrator |
2025-11-01 12:49:46.938834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.938845 | orchestrator | Saturday 01 November 2025 12:49:38 +0000 (0:00:00.196) 0:00:17.742 *****
2025-11-01 12:49:46.938856 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.938867 | orchestrator |
2025-11-01 12:49:46.938878 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.938888 | orchestrator | Saturday 01 November 2025 12:49:38 +0000 (0:00:00.194) 0:00:17.937 *****
2025-11-01 12:49:46.938899 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.938933 | orchestrator |
2025-11-01 12:49:46.938945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.938956 | orchestrator | Saturday 01 November 2025 12:49:38 +0000 (0:00:00.236) 0:00:18.174 *****
2025-11-01 12:49:46.938967 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.938978 | orchestrator |
2025-11-01 12:49:46.938989 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939000 | orchestrator | Saturday 01 November 2025 12:49:39 +0000 (0:00:00.638) 0:00:18.812 *****
2025-11-01 12:49:46.939013 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.939025 | orchestrator |
2025-11-01 12:49:46.939036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939049 | orchestrator | Saturday 01 November 2025 12:49:39 +0000 (0:00:00.217) 0:00:19.030 *****
2025-11-01 12:49:46.939061 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.939074 | orchestrator |
2025-11-01 12:49:46.939086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939099 | orchestrator | Saturday 01 November 2025 12:49:39 +0000 (0:00:00.243) 0:00:19.273 *****
2025-11-01 12:49:46.939111 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:49:46.939124 | orchestrator |
2025-11-01 12:49:46.939136 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939149 | orchestrator | Saturday 01 November 2025 12:49:39 +0000 (0:00:00.243) 0:00:19.517 *****
2025-11-01 12:49:46.939161 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df)
2025-11-01 12:49:46.939175 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df)
2025-11-01 12:49:46.939216 | orchestrator |
2025-11-01 12:49:46.939229 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939242 | orchestrator | Saturday 01 November 2025 12:49:40 +0000 (0:00:00.576) 0:00:20.093 *****
2025-11-01 12:49:46.939254 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db)
2025-11-01 12:49:46.939266 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db)
2025-11-01 12:49:46.939278 | orchestrator |
2025-11-01 12:49:46.939291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939303 | orchestrator | Saturday 01 November 2025 12:49:41 +0000 (0:00:00.544) 0:00:20.637 *****
2025-11-01 12:49:46.939315 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805)
2025-11-01 12:49:46.939327 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805)
2025-11-01 12:49:46.939340 | orchestrator |
2025-11-01 12:49:46.939352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939363 | orchestrator | Saturday 01 November 2025 12:49:41 +0000 (0:00:00.518) 0:00:21.156 *****
2025-11-01 12:49:46.939391 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03)
2025-11-01 12:49:46.939402 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03)
2025-11-01 12:49:46.939413 | orchestrator |
2025-11-01 12:49:46.939424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:49:46.939435 | orchestrator | Saturday 01 November 2025 12:49:42 +0000 (0:00:00.504) 0:00:21.660 *****
2025-11-01 12:49:46.939446 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-11-01 12:49:46.939456 | orchestrator |
2025-11-01 12:49:46.939467 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:49:46.939484 | orchestrator | Saturday 01 November 2025 12:49:42 +0000 (0:00:00.351) 0:00:22.012 *****
2025-11-01 12:49:46.939495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-11-01 12:49:46.939514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-11-01 12:49:46.939525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-11-01 12:49:46.939535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-11-01 12:49:46.939546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-11-01 12:49:46.939556 | orchestrator |
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-01 12:49:46.939567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-01 12:49:46.939578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-01 12:49:46.939588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-01 12:49:46.939599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-01 12:49:46.939610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-01 12:49:46.939621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-01 12:49:46.939631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-01 12:49:46.939642 | orchestrator | 2025-11-01 12:49:46.939653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939663 | orchestrator | Saturday 01 November 2025 12:49:43 +0000 (0:00:00.522) 0:00:22.534 ***** 2025-11-01 12:49:46.939674 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939685 | orchestrator | 2025-11-01 12:49:46.939696 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939706 | orchestrator | Saturday 01 November 2025 12:49:43 +0000 (0:00:00.843) 0:00:23.378 ***** 2025-11-01 12:49:46.939717 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939728 | orchestrator | 2025-11-01 12:49:46.939739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939750 | orchestrator | Saturday 01 November 2025 12:49:44 +0000 (0:00:00.277) 0:00:23.655 ***** 
2025-11-01 12:49:46.939760 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939771 | orchestrator | 2025-11-01 12:49:46.939782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939793 | orchestrator | Saturday 01 November 2025 12:49:44 +0000 (0:00:00.233) 0:00:23.888 ***** 2025-11-01 12:49:46.939803 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939814 | orchestrator | 2025-11-01 12:49:46.939825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939836 | orchestrator | Saturday 01 November 2025 12:49:44 +0000 (0:00:00.239) 0:00:24.128 ***** 2025-11-01 12:49:46.939847 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939858 | orchestrator | 2025-11-01 12:49:46.939868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939879 | orchestrator | Saturday 01 November 2025 12:49:44 +0000 (0:00:00.244) 0:00:24.373 ***** 2025-11-01 12:49:46.939890 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939901 | orchestrator | 2025-11-01 12:49:46.939911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939922 | orchestrator | Saturday 01 November 2025 12:49:45 +0000 (0:00:00.248) 0:00:24.622 ***** 2025-11-01 12:49:46.939932 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939943 | orchestrator | 2025-11-01 12:49:46.939954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.939965 | orchestrator | Saturday 01 November 2025 12:49:45 +0000 (0:00:00.274) 0:00:24.897 ***** 2025-11-01 12:49:46.939976 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.939986 | orchestrator | 2025-11-01 12:49:46.939997 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-11-01 12:49:46.940014 | orchestrator | Saturday 01 November 2025 12:49:45 +0000 (0:00:00.226) 0:00:25.123 ***** 2025-11-01 12:49:46.940025 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-01 12:49:46.940037 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-01 12:49:46.940048 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-01 12:49:46.940058 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-01 12:49:46.940069 | orchestrator | 2025-11-01 12:49:46.940080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:46.940091 | orchestrator | Saturday 01 November 2025 12:49:46 +0000 (0:00:01.078) 0:00:26.201 ***** 2025-11-01 12:49:46.940101 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:46.940112 | orchestrator | 2025-11-01 12:49:46.940129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:54.083247 | orchestrator | Saturday 01 November 2025 12:49:46 +0000 (0:00:00.251) 0:00:26.452 ***** 2025-11-01 12:49:54.083353 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083369 | orchestrator | 2025-11-01 12:49:54.083381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:54.083392 | orchestrator | Saturday 01 November 2025 12:49:47 +0000 (0:00:00.245) 0:00:26.697 ***** 2025-11-01 12:49:54.083403 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083414 | orchestrator | 2025-11-01 12:49:54.083425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:49:54.083436 | orchestrator | Saturday 01 November 2025 12:49:47 +0000 (0:00:00.236) 0:00:26.934 ***** 2025-11-01 12:49:54.083446 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083457 | orchestrator | 2025-11-01 12:49:54.083484 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-11-01 12:49:54.083496 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.805) 0:00:27.739 ***** 2025-11-01 12:49:54.083507 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-11-01 12:49:54.083517 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-11-01 12:49:54.083528 | orchestrator | 2025-11-01 12:49:54.083538 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-01 12:49:54.083549 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.200) 0:00:27.940 ***** 2025-11-01 12:49:54.083559 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083570 | orchestrator | 2025-11-01 12:49:54.083581 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-01 12:49:54.083592 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.145) 0:00:28.086 ***** 2025-11-01 12:49:54.083602 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083613 | orchestrator | 2025-11-01 12:49:54.083623 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-01 12:49:54.083634 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.133) 0:00:28.220 ***** 2025-11-01 12:49:54.083644 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083655 | orchestrator | 2025-11-01 12:49:54.083665 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-01 12:49:54.083676 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.126) 0:00:28.347 ***** 2025-11-01 12:49:54.083687 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:49:54.083698 | orchestrator | 2025-11-01 12:49:54.083709 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-01 
12:49:54.083719 | orchestrator | Saturday 01 November 2025 12:49:48 +0000 (0:00:00.133) 0:00:28.480 ***** 2025-11-01 12:49:54.083730 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '277f9d3d-0c20-556e-833f-7bea0f2408d1'}}) 2025-11-01 12:49:54.083742 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '780930f3-bf13-5252-a15a-5f9f469ca774'}}) 2025-11-01 12:49:54.083755 | orchestrator | 2025-11-01 12:49:54.083768 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-01 12:49:54.083803 | orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.172) 0:00:28.653 ***** 2025-11-01 12:49:54.083817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '277f9d3d-0c20-556e-833f-7bea0f2408d1'}})  2025-11-01 12:49:54.083831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '780930f3-bf13-5252-a15a-5f9f469ca774'}})  2025-11-01 12:49:54.083843 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083855 | orchestrator | 2025-11-01 12:49:54.083868 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-01 12:49:54.083880 | orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.147) 0:00:28.800 ***** 2025-11-01 12:49:54.083893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '277f9d3d-0c20-556e-833f-7bea0f2408d1'}})  2025-11-01 12:49:54.083905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '780930f3-bf13-5252-a15a-5f9f469ca774'}})  2025-11-01 12:49:54.083917 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083930 | orchestrator | 2025-11-01 12:49:54.083942 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-01 12:49:54.083955 | 
orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.168) 0:00:28.969 ***** 2025-11-01 12:49:54.083967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '277f9d3d-0c20-556e-833f-7bea0f2408d1'}})  2025-11-01 12:49:54.083977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '780930f3-bf13-5252-a15a-5f9f469ca774'}})  2025-11-01 12:49:54.083989 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.083999 | orchestrator | 2025-11-01 12:49:54.084010 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-01 12:49:54.084021 | orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.139) 0:00:29.109 ***** 2025-11-01 12:49:54.084031 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:49:54.084042 | orchestrator | 2025-11-01 12:49:54.084052 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-01 12:49:54.084063 | orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.139) 0:00:29.249 ***** 2025-11-01 12:49:54.084073 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:49:54.084084 | orchestrator | 2025-11-01 12:49:54.084094 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-01 12:49:54.084105 | orchestrator | Saturday 01 November 2025 12:49:49 +0000 (0:00:00.150) 0:00:29.399 ***** 2025-11-01 12:49:54.084115 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084126 | orchestrator | 2025-11-01 12:49:54.084152 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-01 12:49:54.084163 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.273) 0:00:29.672 ***** 2025-11-01 12:49:54.084174 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084202 | orchestrator | 2025-11-01 12:49:54.084214 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-11-01 12:49:54.084225 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.113) 0:00:29.786 ***** 2025-11-01 12:49:54.084235 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084246 | orchestrator | 2025-11-01 12:49:54.084256 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-01 12:49:54.084267 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.132) 0:00:29.918 ***** 2025-11-01 12:49:54.084278 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 12:49:54.084289 | orchestrator |  "ceph_osd_devices": { 2025-11-01 12:49:54.084299 | orchestrator |  "sdb": { 2025-11-01 12:49:54.084310 | orchestrator |  "osd_lvm_uuid": "277f9d3d-0c20-556e-833f-7bea0f2408d1" 2025-11-01 12:49:54.084321 | orchestrator |  }, 2025-11-01 12:49:54.084331 | orchestrator |  "sdc": { 2025-11-01 12:49:54.084351 | orchestrator |  "osd_lvm_uuid": "780930f3-bf13-5252-a15a-5f9f469ca774" 2025-11-01 12:49:54.084362 | orchestrator |  } 2025-11-01 12:49:54.084372 | orchestrator |  } 2025-11-01 12:49:54.084383 | orchestrator | } 2025-11-01 12:49:54.084394 | orchestrator | 2025-11-01 12:49:54.084405 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-01 12:49:54.084415 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.139) 0:00:30.058 ***** 2025-11-01 12:49:54.084426 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084437 | orchestrator | 2025-11-01 12:49:54.084454 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-01 12:49:54.084465 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.123) 0:00:30.182 ***** 2025-11-01 12:49:54.084476 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084486 | orchestrator | 2025-11-01 12:49:54.084497 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-11-01 12:49:54.084507 | orchestrator | Saturday 01 November 2025 12:49:50 +0000 (0:00:00.190) 0:00:30.372 ***** 2025-11-01 12:49:54.084518 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:49:54.084529 | orchestrator | 2025-11-01 12:49:54.084539 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-01 12:49:54.084550 | orchestrator | Saturday 01 November 2025 12:49:51 +0000 (0:00:00.159) 0:00:30.532 ***** 2025-11-01 12:49:54.084560 | orchestrator | changed: [testbed-node-4] => { 2025-11-01 12:49:54.084571 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-01 12:49:54.084581 | orchestrator |  "ceph_osd_devices": { 2025-11-01 12:49:54.084592 | orchestrator |  "sdb": { 2025-11-01 12:49:54.084602 | orchestrator |  "osd_lvm_uuid": "277f9d3d-0c20-556e-833f-7bea0f2408d1" 2025-11-01 12:49:54.084617 | orchestrator |  }, 2025-11-01 12:49:54.084628 | orchestrator |  "sdc": { 2025-11-01 12:49:54.084639 | orchestrator |  "osd_lvm_uuid": "780930f3-bf13-5252-a15a-5f9f469ca774" 2025-11-01 12:49:54.084650 | orchestrator |  } 2025-11-01 12:49:54.084660 | orchestrator |  }, 2025-11-01 12:49:54.084671 | orchestrator |  "lvm_volumes": [ 2025-11-01 12:49:54.084681 | orchestrator |  { 2025-11-01 12:49:54.084692 | orchestrator |  "data": "osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1", 2025-11-01 12:49:54.084703 | orchestrator |  "data_vg": "ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1" 2025-11-01 12:49:54.084713 | orchestrator |  }, 2025-11-01 12:49:54.084724 | orchestrator |  { 2025-11-01 12:49:54.084734 | orchestrator |  "data": "osd-block-780930f3-bf13-5252-a15a-5f9f469ca774", 2025-11-01 12:49:54.084745 | orchestrator |  "data_vg": "ceph-780930f3-bf13-5252-a15a-5f9f469ca774" 2025-11-01 12:49:54.084756 | orchestrator |  } 2025-11-01 12:49:54.084766 | orchestrator |  ] 2025-11-01 12:49:54.084777 | orchestrator |  } 2025-11-01 12:49:54.084787 | 
orchestrator | } 2025-11-01 12:49:54.084798 | orchestrator | 2025-11-01 12:49:54.084808 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-01 12:49:54.084819 | orchestrator | Saturday 01 November 2025 12:49:51 +0000 (0:00:00.253) 0:00:30.785 ***** 2025-11-01 12:49:54.084830 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-11-01 12:49:54.084840 | orchestrator | 2025-11-01 12:49:54.084851 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-11-01 12:49:54.084862 | orchestrator | 2025-11-01 12:49:54.084872 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 12:49:54.084883 | orchestrator | Saturday 01 November 2025 12:49:52 +0000 (0:00:01.248) 0:00:32.033 ***** 2025-11-01 12:49:54.084894 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 12:49:54.084904 | orchestrator | 2025-11-01 12:49:54.084915 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 12:49:54.084925 | orchestrator | Saturday 01 November 2025 12:49:53 +0000 (0:00:00.776) 0:00:32.809 ***** 2025-11-01 12:49:54.084943 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:49:54.084953 | orchestrator | 2025-11-01 12:49:54.084964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:49:54.084975 | orchestrator | Saturday 01 November 2025 12:49:53 +0000 (0:00:00.273) 0:00:33.083 ***** 2025-11-01 12:49:54.084985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-01 12:49:54.084996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-01 12:49:54.085006 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-01 
12:49:54.085017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-01 12:49:54.085027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-01 12:49:54.085038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-01 12:49:54.085054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-01 12:50:04.010296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-01 12:50:04.010385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-01 12:50:04.010399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-01 12:50:04.010411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-01 12:50:04.010422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-01 12:50:04.010433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-01 12:50:04.010444 | orchestrator | 2025-11-01 12:50:04.010456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010468 | orchestrator | Saturday 01 November 2025 12:49:54 +0000 (0:00:00.513) 0:00:33.596 ***** 2025-11-01 12:50:04.010479 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010491 | orchestrator | 2025-11-01 12:50:04.010502 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010513 | orchestrator | Saturday 01 November 2025 12:49:54 +0000 (0:00:00.288) 0:00:33.884 ***** 2025-11-01 12:50:04.010524 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010535 | orchestrator | 
2025-11-01 12:50:04.010546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010557 | orchestrator | Saturday 01 November 2025 12:49:54 +0000 (0:00:00.260) 0:00:34.145 ***** 2025-11-01 12:50:04.010568 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010579 | orchestrator | 2025-11-01 12:50:04.010590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010601 | orchestrator | Saturday 01 November 2025 12:49:54 +0000 (0:00:00.247) 0:00:34.393 ***** 2025-11-01 12:50:04.010611 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010622 | orchestrator | 2025-11-01 12:50:04.010633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010644 | orchestrator | Saturday 01 November 2025 12:49:55 +0000 (0:00:00.228) 0:00:34.622 ***** 2025-11-01 12:50:04.010655 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010666 | orchestrator | 2025-11-01 12:50:04.010677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010688 | orchestrator | Saturday 01 November 2025 12:49:55 +0000 (0:00:00.245) 0:00:34.867 ***** 2025-11-01 12:50:04.010699 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010710 | orchestrator | 2025-11-01 12:50:04.010721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010732 | orchestrator | Saturday 01 November 2025 12:49:55 +0000 (0:00:00.252) 0:00:35.120 ***** 2025-11-01 12:50:04.010743 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010775 | orchestrator | 2025-11-01 12:50:04.010786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010797 | orchestrator | Saturday 01 November 2025 12:49:55 +0000 
(0:00:00.301) 0:00:35.421 ***** 2025-11-01 12:50:04.010808 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.010821 | orchestrator | 2025-11-01 12:50:04.010846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010860 | orchestrator | Saturday 01 November 2025 12:49:56 +0000 (0:00:00.249) 0:00:35.670 ***** 2025-11-01 12:50:04.010873 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a) 2025-11-01 12:50:04.010887 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a) 2025-11-01 12:50:04.010899 | orchestrator | 2025-11-01 12:50:04.010912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010924 | orchestrator | Saturday 01 November 2025 12:49:57 +0000 (0:00:01.037) 0:00:36.708 ***** 2025-11-01 12:50:04.010937 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa) 2025-11-01 12:50:04.010950 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa) 2025-11-01 12:50:04.010962 | orchestrator | 2025-11-01 12:50:04.010976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.010988 | orchestrator | Saturday 01 November 2025 12:49:57 +0000 (0:00:00.525) 0:00:37.233 ***** 2025-11-01 12:50:04.011001 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8) 2025-11-01 12:50:04.011014 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8) 2025-11-01 12:50:04.011026 | orchestrator | 2025-11-01 12:50:04.011039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.011051 | orchestrator | 
Saturday 01 November 2025 12:49:58 +0000 (0:00:00.501) 0:00:37.735 ***** 2025-11-01 12:50:04.011064 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc) 2025-11-01 12:50:04.011076 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc) 2025-11-01 12:50:04.011089 | orchestrator | 2025-11-01 12:50:04.011101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:50:04.011113 | orchestrator | Saturday 01 November 2025 12:49:58 +0000 (0:00:00.513) 0:00:38.248 ***** 2025-11-01 12:50:04.011126 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 12:50:04.011138 | orchestrator | 2025-11-01 12:50:04.011151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011164 | orchestrator | Saturday 01 November 2025 12:49:59 +0000 (0:00:00.396) 0:00:38.644 ***** 2025-11-01 12:50:04.011204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-01 12:50:04.011217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-11-01 12:50:04.011227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-01 12:50:04.011238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-01 12:50:04.011249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-01 12:50:04.011260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-01 12:50:04.011271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-01 12:50:04.011281 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-01 12:50:04.011293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-01 12:50:04.011312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-01 12:50:04.011323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-01 12:50:04.011334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-01 12:50:04.011345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-01 12:50:04.011356 | orchestrator | 2025-11-01 12:50:04.011367 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011377 | orchestrator | Saturday 01 November 2025 12:49:59 +0000 (0:00:00.513) 0:00:39.158 ***** 2025-11-01 12:50:04.011388 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011399 | orchestrator | 2025-11-01 12:50:04.011410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011421 | orchestrator | Saturday 01 November 2025 12:49:59 +0000 (0:00:00.235) 0:00:39.394 ***** 2025-11-01 12:50:04.011432 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011443 | orchestrator | 2025-11-01 12:50:04.011454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011465 | orchestrator | Saturday 01 November 2025 12:50:00 +0000 (0:00:00.232) 0:00:39.626 ***** 2025-11-01 12:50:04.011476 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011487 | orchestrator | 2025-11-01 12:50:04.011498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011508 | 
orchestrator | Saturday 01 November 2025 12:50:00 +0000 (0:00:00.273) 0:00:39.900 ***** 2025-11-01 12:50:04.011519 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011530 | orchestrator | 2025-11-01 12:50:04.011541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011552 | orchestrator | Saturday 01 November 2025 12:50:00 +0000 (0:00:00.281) 0:00:40.182 ***** 2025-11-01 12:50:04.011563 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011574 | orchestrator | 2025-11-01 12:50:04.011585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011596 | orchestrator | Saturday 01 November 2025 12:50:00 +0000 (0:00:00.256) 0:00:40.438 ***** 2025-11-01 12:50:04.011607 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011618 | orchestrator | 2025-11-01 12:50:04.011629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011640 | orchestrator | Saturday 01 November 2025 12:50:01 +0000 (0:00:00.788) 0:00:41.227 ***** 2025-11-01 12:50:04.011650 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011661 | orchestrator | 2025-11-01 12:50:04.011672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011683 | orchestrator | Saturday 01 November 2025 12:50:01 +0000 (0:00:00.242) 0:00:41.469 ***** 2025-11-01 12:50:04.011694 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011705 | orchestrator | 2025-11-01 12:50:04.011716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011727 | orchestrator | Saturday 01 November 2025 12:50:02 +0000 (0:00:00.214) 0:00:41.684 ***** 2025-11-01 12:50:04.011738 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-01 12:50:04.011749 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-11-01 12:50:04.011760 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-01 12:50:04.011771 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-01 12:50:04.011782 | orchestrator | 2025-11-01 12:50:04.011792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011804 | orchestrator | Saturday 01 November 2025 12:50:03 +0000 (0:00:00.849) 0:00:42.533 ***** 2025-11-01 12:50:04.011815 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011825 | orchestrator | 2025-11-01 12:50:04.011836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011853 | orchestrator | Saturday 01 November 2025 12:50:03 +0000 (0:00:00.223) 0:00:42.757 ***** 2025-11-01 12:50:04.011864 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011875 | orchestrator | 2025-11-01 12:50:04.011886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011897 | orchestrator | Saturday 01 November 2025 12:50:03 +0000 (0:00:00.243) 0:00:43.000 ***** 2025-11-01 12:50:04.011908 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011919 | orchestrator | 2025-11-01 12:50:04.011930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:50:04.011941 | orchestrator | Saturday 01 November 2025 12:50:03 +0000 (0:00:00.244) 0:00:43.244 ***** 2025-11-01 12:50:04.011957 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:04.011969 | orchestrator | 2025-11-01 12:50:04.011980 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-11-01 12:50:04.011997 | orchestrator | Saturday 01 November 2025 12:50:04 +0000 (0:00:00.286) 0:00:43.531 ***** 2025-11-01 12:50:09.519043 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-11-01 12:50:09.519133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-11-01 12:50:09.519148 | orchestrator | 2025-11-01 12:50:09.519160 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-11-01 12:50:09.519171 | orchestrator | Saturday 01 November 2025 12:50:04 +0000 (0:00:00.256) 0:00:43.788 ***** 2025-11-01 12:50:09.519244 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519257 | orchestrator | 2025-11-01 12:50:09.519269 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-11-01 12:50:09.519279 | orchestrator | Saturday 01 November 2025 12:50:04 +0000 (0:00:00.156) 0:00:43.945 ***** 2025-11-01 12:50:09.519290 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519301 | orchestrator | 2025-11-01 12:50:09.519312 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-11-01 12:50:09.519323 | orchestrator | Saturday 01 November 2025 12:50:04 +0000 (0:00:00.133) 0:00:44.078 ***** 2025-11-01 12:50:09.519333 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519344 | orchestrator | 2025-11-01 12:50:09.519355 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-11-01 12:50:09.519365 | orchestrator | Saturday 01 November 2025 12:50:04 +0000 (0:00:00.392) 0:00:44.470 ***** 2025-11-01 12:50:09.519376 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:50:09.519387 | orchestrator | 2025-11-01 12:50:09.519398 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-11-01 12:50:09.519409 | orchestrator | Saturday 01 November 2025 12:50:05 +0000 (0:00:00.294) 0:00:44.765 ***** 2025-11-01 12:50:09.519421 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fea132eb-9454-553c-8b4e-faa263198857'}}) 
2025-11-01 12:50:09.519432 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}}) 2025-11-01 12:50:09.519443 | orchestrator | 2025-11-01 12:50:09.519453 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-11-01 12:50:09.519464 | orchestrator | Saturday 01 November 2025 12:50:05 +0000 (0:00:00.287) 0:00:45.053 ***** 2025-11-01 12:50:09.519476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fea132eb-9454-553c-8b4e-faa263198857'}})  2025-11-01 12:50:09.519487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}})  2025-11-01 12:50:09.519498 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519509 | orchestrator | 2025-11-01 12:50:09.519532 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-11-01 12:50:09.519543 | orchestrator | Saturday 01 November 2025 12:50:05 +0000 (0:00:00.326) 0:00:45.380 ***** 2025-11-01 12:50:09.519554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fea132eb-9454-553c-8b4e-faa263198857'}})  2025-11-01 12:50:09.519584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}})  2025-11-01 12:50:09.519596 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519607 | orchestrator | 2025-11-01 12:50:09.519619 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-11-01 12:50:09.519632 | orchestrator | Saturday 01 November 2025 12:50:06 +0000 (0:00:00.220) 0:00:45.601 ***** 2025-11-01 12:50:09.519645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fea132eb-9454-553c-8b4e-faa263198857'}})  2025-11-01 
12:50:09.519657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}})  2025-11-01 12:50:09.519669 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519681 | orchestrator | 2025-11-01 12:50:09.519694 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-11-01 12:50:09.519706 | orchestrator | Saturday 01 November 2025 12:50:06 +0000 (0:00:00.180) 0:00:45.781 ***** 2025-11-01 12:50:09.519718 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:50:09.519730 | orchestrator | 2025-11-01 12:50:09.519743 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-11-01 12:50:09.519755 | orchestrator | Saturday 01 November 2025 12:50:06 +0000 (0:00:00.166) 0:00:45.948 ***** 2025-11-01 12:50:09.519768 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:50:09.519780 | orchestrator | 2025-11-01 12:50:09.519792 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-11-01 12:50:09.519805 | orchestrator | Saturday 01 November 2025 12:50:06 +0000 (0:00:00.197) 0:00:46.146 ***** 2025-11-01 12:50:09.519817 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519829 | orchestrator | 2025-11-01 12:50:09.519842 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-11-01 12:50:09.519854 | orchestrator | Saturday 01 November 2025 12:50:06 +0000 (0:00:00.249) 0:00:46.396 ***** 2025-11-01 12:50:09.519866 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.519878 | orchestrator | 2025-11-01 12:50:09.519890 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-11-01 12:50:09.519903 | orchestrator | Saturday 01 November 2025 12:50:07 +0000 (0:00:00.164) 0:00:46.561 ***** 2025-11-01 12:50:09.519915 | orchestrator | skipping: [testbed-node-5] 
2025-11-01 12:50:09.519927 | orchestrator | 2025-11-01 12:50:09.519940 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-11-01 12:50:09.519953 | orchestrator | Saturday 01 November 2025 12:50:07 +0000 (0:00:00.156) 0:00:46.718 ***** 2025-11-01 12:50:09.519965 | orchestrator | ok: [testbed-node-5] => { 2025-11-01 12:50:09.519975 | orchestrator |  "ceph_osd_devices": { 2025-11-01 12:50:09.519986 | orchestrator |  "sdb": { 2025-11-01 12:50:09.519997 | orchestrator |  "osd_lvm_uuid": "fea132eb-9454-553c-8b4e-faa263198857" 2025-11-01 12:50:09.520023 | orchestrator |  }, 2025-11-01 12:50:09.520034 | orchestrator |  "sdc": { 2025-11-01 12:50:09.520045 | orchestrator |  "osd_lvm_uuid": "1e995aa1-0e3d-5a0e-8d57-e00715a81a73" 2025-11-01 12:50:09.520056 | orchestrator |  } 2025-11-01 12:50:09.520066 | orchestrator |  } 2025-11-01 12:50:09.520077 | orchestrator | } 2025-11-01 12:50:09.520088 | orchestrator | 2025-11-01 12:50:09.520098 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-11-01 12:50:09.520109 | orchestrator | Saturday 01 November 2025 12:50:07 +0000 (0:00:00.176) 0:00:46.894 ***** 2025-11-01 12:50:09.520120 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.520131 | orchestrator | 2025-11-01 12:50:09.520141 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-11-01 12:50:09.520152 | orchestrator | Saturday 01 November 2025 12:50:07 +0000 (0:00:00.146) 0:00:47.040 ***** 2025-11-01 12:50:09.520163 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:50:09.520174 | orchestrator | 2025-11-01 12:50:09.520203 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-11-01 12:50:09.520221 | orchestrator | Saturday 01 November 2025 12:50:07 +0000 (0:00:00.399) 0:00:47.440 ***** 2025-11-01 12:50:09.520232 | orchestrator | skipping: [testbed-node-5] 
2025-11-01 12:50:09.520242 | orchestrator | 2025-11-01 12:50:09.520253 | orchestrator | TASK [Print configuration data] ************************************************ 2025-11-01 12:50:09.520264 | orchestrator | Saturday 01 November 2025 12:50:08 +0000 (0:00:00.165) 0:00:47.605 ***** 2025-11-01 12:50:09.520275 | orchestrator | changed: [testbed-node-5] => { 2025-11-01 12:50:09.520285 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-11-01 12:50:09.520296 | orchestrator |  "ceph_osd_devices": { 2025-11-01 12:50:09.520307 | orchestrator |  "sdb": { 2025-11-01 12:50:09.520318 | orchestrator |  "osd_lvm_uuid": "fea132eb-9454-553c-8b4e-faa263198857" 2025-11-01 12:50:09.520329 | orchestrator |  }, 2025-11-01 12:50:09.520339 | orchestrator |  "sdc": { 2025-11-01 12:50:09.520350 | orchestrator |  "osd_lvm_uuid": "1e995aa1-0e3d-5a0e-8d57-e00715a81a73" 2025-11-01 12:50:09.520361 | orchestrator |  } 2025-11-01 12:50:09.520371 | orchestrator |  }, 2025-11-01 12:50:09.520382 | orchestrator |  "lvm_volumes": [ 2025-11-01 12:50:09.520392 | orchestrator |  { 2025-11-01 12:50:09.520403 | orchestrator |  "data": "osd-block-fea132eb-9454-553c-8b4e-faa263198857", 2025-11-01 12:50:09.520414 | orchestrator |  "data_vg": "ceph-fea132eb-9454-553c-8b4e-faa263198857" 2025-11-01 12:50:09.520424 | orchestrator |  }, 2025-11-01 12:50:09.520435 | orchestrator |  { 2025-11-01 12:50:09.520445 | orchestrator |  "data": "osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73", 2025-11-01 12:50:09.520456 | orchestrator |  "data_vg": "ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73" 2025-11-01 12:50:09.520467 | orchestrator |  } 2025-11-01 12:50:09.520477 | orchestrator |  ] 2025-11-01 12:50:09.520488 | orchestrator |  } 2025-11-01 12:50:09.520502 | orchestrator | } 2025-11-01 12:50:09.520513 | orchestrator | 2025-11-01 12:50:09.520524 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-11-01 12:50:09.520535 | orchestrator | Saturday 01 November 2025 
12:50:08 +0000 (0:00:00.229) 0:00:47.834 ***** 2025-11-01 12:50:09.520546 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 12:50:09.520557 | orchestrator | 2025-11-01 12:50:09.520568 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:50:09.520592 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 12:50:09.520604 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 12:50:09.520615 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 12:50:09.520626 | orchestrator | 2025-11-01 12:50:09.520637 | orchestrator | 2025-11-01 12:50:09.520647 | orchestrator | 2025-11-01 12:50:09.520658 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:50:09.520669 | orchestrator | Saturday 01 November 2025 12:50:09 +0000 (0:00:01.191) 0:00:49.026 ***** 2025-11-01 12:50:09.520680 | orchestrator | =============================================================================== 2025-11-01 12:50:09.520690 | orchestrator | Write configuration file ------------------------------------------------ 4.44s 2025-11-01 12:50:09.520701 | orchestrator | Add known links to the list of available block devices ------------------ 1.50s 2025-11-01 12:50:09.520712 | orchestrator | Add known partitions to the list of available block devices ------------- 1.45s 2025-11-01 12:50:09.520722 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.31s 2025-11-01 12:50:09.520733 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2025-11-01 12:50:09.520751 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s 2025-11-01 12:50:09.520762 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2025-11-01 12:50:09.520772 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2025-11-01 12:50:09.520783 | orchestrator | Print configuration data ------------------------------------------------ 0.93s 2025-11-01 12:50:09.520794 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2025-11-01 12:50:09.520804 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-11-01 12:50:09.520815 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2025-11-01 12:50:09.520826 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-11-01 12:50:09.520837 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2025-11-01 12:50:09.520854 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.79s 2025-11-01 12:50:09.949078 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-11-01 12:50:09.949135 | orchestrator | Print DB devices -------------------------------------------------------- 0.72s 2025-11-01 12:50:09.949147 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-11-01 12:50:09.949156 | orchestrator | Set DB devices config data ---------------------------------------------- 0.68s 2025-11-01 12:50:09.949166 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.65s 2025-11-01 12:50:32.915460 | orchestrator | 2025-11-01 12:50:32 | INFO  | Task d69b6d02-71cd-4a16-9bf3-5b195fbbf82e (sync inventory) is running in background. Output coming soon. 
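The "Print configuration data" output above shows how each entry of `ceph_osd_devices` is turned into an `lvm_volumes` entry: the `osd_lvm_uuid` is prefixed with `osd-block-` for the LV name and `ceph-` for the VG name. A minimal sketch reproducing that mapping (this mirrors only what is visible in the log, not the actual OSISM task code):

```python
# Rebuild the lvm_volumes list from ceph_osd_devices, as seen in the
# "Print configuration data" task output for testbed-node-5.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "fea132eb-9454-553c-8b4e-faa263198857"},
    "sdc": {"osd_lvm_uuid": "1e995aa1-0e3d-5a0e-8d57-e00715a81a73"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]

for vol in lvm_volumes:
    print(vol["data"], "->", vol["data_vg"])
```

This is the "block only" case; the skipped `block + db`, `block + wal`, and `block + db + wal` variants in the log would add `db_vg`/`wal_vg` keys when dedicated DB/WAL devices are configured.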
2025-11-01 12:51:06.968400 | orchestrator | 2025-11-01 12:50:34 | INFO  | Starting group_vars file reorganization 2025-11-01 12:51:06.969092 | orchestrator | 2025-11-01 12:50:34 | INFO  | Moved 0 file(s) to their respective directories 2025-11-01 12:51:06.969121 | orchestrator | 2025-11-01 12:50:34 | INFO  | Group_vars file reorganization completed 2025-11-01 12:51:06.969135 | orchestrator | 2025-11-01 12:50:37 | INFO  | Starting variable preparation from inventory 2025-11-01 12:51:06.969146 | orchestrator | 2025-11-01 12:50:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-11-01 12:51:06.969158 | orchestrator | 2025-11-01 12:50:42 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-11-01 12:51:06.969169 | orchestrator | 2025-11-01 12:50:42 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-11-01 12:51:06.969203 | orchestrator | 2025-11-01 12:50:42 | INFO  | 3 file(s) written, 6 host(s) processed 2025-11-01 12:51:06.969216 | orchestrator | 2025-11-01 12:50:42 | INFO  | Variable preparation completed 2025-11-01 12:51:06.969228 | orchestrator | 2025-11-01 12:50:43 | INFO  | Starting inventory overwrite handling 2025-11-01 12:51:06.969240 | orchestrator | 2025-11-01 12:50:43 | INFO  | Handling group overwrites in 99-overwrite 2025-11-01 12:51:06.969253 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group frr:children from 60-generic 2025-11-01 12:51:06.969264 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group storage:children from 50-kolla 2025-11-01 12:51:06.969274 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group netbird:children from 50-infrastructure 2025-11-01 12:51:06.969286 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group ceph-rgw from 50-ceph 2025-11-01 12:51:06.969298 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group ceph-mds from 50-ceph 2025-11-01 12:51:06.969309 | orchestrator | 2025-11-01 12:50:43 | INFO  | Handling group 
overwrites in 20-roles 2025-11-01 12:51:06.969321 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removing group k3s_node from 50-infrastructure 2025-11-01 12:51:06.969352 | orchestrator | 2025-11-01 12:50:43 | INFO  | Removed 6 group(s) in total 2025-11-01 12:51:06.969363 | orchestrator | 2025-11-01 12:50:43 | INFO  | Inventory overwrite handling completed 2025-11-01 12:51:06.969372 | orchestrator | 2025-11-01 12:50:45 | INFO  | Starting merge of inventory files 2025-11-01 12:51:06.969382 | orchestrator | 2025-11-01 12:50:45 | INFO  | Inventory files merged successfully 2025-11-01 12:51:06.969391 | orchestrator | 2025-11-01 12:50:51 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-11-01 12:51:06.969401 | orchestrator | 2025-11-01 12:51:05 | INFO  | Successfully wrote ClusterShell configuration 2025-11-01 12:51:06.969411 | orchestrator | [master fbc3992] 2025-11-01-12-51 2025-11-01 12:51:06.969422 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-11-01 12:51:09.712791 | orchestrator | 2025-11-01 12:51:09 | INFO  | Task ddd1eec5-3764-483f-b37d-27c1a5f9eca9 (ceph-create-lvm-devices) was prepared for execution. 2025-11-01 12:51:09.712890 | orchestrator | 2025-11-01 12:51:09 | INFO  | It takes a moment until task ddd1eec5-3764-483f-b37d-27c1a5f9eca9 (ceph-create-lvm-devices) has been started and output is visible here. 
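The per-device UUIDs recorded earlier (e.g. `fea132eb-9454-553c-...`) have `5` in the version position, i.e. they look like name-based UUIDv5 values, which would make them stable across reruns. The exact namespace and name string used are not visible in this log, so the following is only an illustration of the general technique with hypothetical inputs:

```python
import uuid

# Hypothetical namespace and name; the real inputs used by the playbook are
# not shown in the log. uuid5 is deterministic: same inputs, same UUID.
namespace = uuid.UUID("00000000-0000-0000-0000-000000000000")
osd_uuid = uuid.uuid5(namespace, "testbed-node-3/sdb")

print(osd_uuid, "version:", osd_uuid.version)
```

Determinism is the point: rerunning "Set UUIDs for OSD VGs/LVs" must yield the same VG/LV names, or the `ceph-create-lvm-devices` play below would try to create duplicates.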
2025-11-01 12:51:23.224210 | orchestrator | 2025-11-01 12:51:23.224317 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-01 12:51:23.224333 | orchestrator | 2025-11-01 12:51:23.224345 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 12:51:23.224357 | orchestrator | Saturday 01 November 2025 12:51:14 +0000 (0:00:00.414) 0:00:00.414 ***** 2025-11-01 12:51:23.224368 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-11-01 12:51:23.224379 | orchestrator | 2025-11-01 12:51:23.224390 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 12:51:23.224400 | orchestrator | Saturday 01 November 2025 12:51:14 +0000 (0:00:00.278) 0:00:00.693 ***** 2025-11-01 12:51:23.224411 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:51:23.224423 | orchestrator | 2025-11-01 12:51:23.224434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224445 | orchestrator | Saturday 01 November 2025 12:51:15 +0000 (0:00:00.265) 0:00:00.958 ***** 2025-11-01 12:51:23.224455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-11-01 12:51:23.224468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-11-01 12:51:23.224479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-11-01 12:51:23.224489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-11-01 12:51:23.224500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-11-01 12:51:23.224511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-11-01 12:51:23.224521 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-11-01 12:51:23.224532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-11-01 12:51:23.224542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-11-01 12:51:23.224553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-11-01 12:51:23.224564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-11-01 12:51:23.224574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-11-01 12:51:23.224585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-11-01 12:51:23.224596 | orchestrator | 2025-11-01 12:51:23.224606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224642 | orchestrator | Saturday 01 November 2025 12:51:15 +0000 (0:00:00.683) 0:00:01.641 ***** 2025-11-01 12:51:23.224654 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224664 | orchestrator | 2025-11-01 12:51:23.224675 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224700 | orchestrator | Saturday 01 November 2025 12:51:16 +0000 (0:00:00.196) 0:00:01.838 ***** 2025-11-01 12:51:23.224712 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224724 | orchestrator | 2025-11-01 12:51:23.224736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224748 | orchestrator | Saturday 01 November 2025 12:51:16 +0000 (0:00:00.234) 0:00:02.073 ***** 2025-11-01 12:51:23.224765 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224777 | orchestrator | 2025-11-01 12:51:23.224789 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-11-01 12:51:23.224801 | orchestrator | Saturday 01 November 2025 12:51:16 +0000 (0:00:00.218) 0:00:02.292 ***** 2025-11-01 12:51:23.224813 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224825 | orchestrator | 2025-11-01 12:51:23.224836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224848 | orchestrator | Saturday 01 November 2025 12:51:16 +0000 (0:00:00.213) 0:00:02.506 ***** 2025-11-01 12:51:23.224860 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224872 | orchestrator | 2025-11-01 12:51:23.224883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224895 | orchestrator | Saturday 01 November 2025 12:51:17 +0000 (0:00:00.246) 0:00:02.753 ***** 2025-11-01 12:51:23.224908 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224919 | orchestrator | 2025-11-01 12:51:23.224931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224943 | orchestrator | Saturday 01 November 2025 12:51:17 +0000 (0:00:00.229) 0:00:02.982 ***** 2025-11-01 12:51:23.224955 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.224967 | orchestrator | 2025-11-01 12:51:23.224979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.224991 | orchestrator | Saturday 01 November 2025 12:51:17 +0000 (0:00:00.260) 0:00:03.242 ***** 2025-11-01 12:51:23.225003 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:51:23.225015 | orchestrator | 2025-11-01 12:51:23.225026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.225038 | orchestrator | Saturday 01 November 2025 12:51:17 +0000 (0:00:00.215) 0:00:03.457 ***** 2025-11-01 12:51:23.225051 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11) 2025-11-01 12:51:23.225063 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11) 2025-11-01 12:51:23.225075 | orchestrator | 2025-11-01 12:51:23.225087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.225097 | orchestrator | Saturday 01 November 2025 12:51:18 +0000 (0:00:00.459) 0:00:03.917 ***** 2025-11-01 12:51:23.225123 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826) 2025-11-01 12:51:23.225135 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826) 2025-11-01 12:51:23.225146 | orchestrator | 2025-11-01 12:51:23.225157 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.225168 | orchestrator | Saturday 01 November 2025 12:51:18 +0000 (0:00:00.709) 0:00:04.626 ***** 2025-11-01 12:51:23.225178 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef) 2025-11-01 12:51:23.225209 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef) 2025-11-01 12:51:23.225220 | orchestrator | 2025-11-01 12:51:23.225231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.225250 | orchestrator | Saturday 01 November 2025 12:51:19 +0000 (0:00:00.753) 0:00:05.380 ***** 2025-11-01 12:51:23.225260 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f) 2025-11-01 12:51:23.225271 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f) 2025-11-01 12:51:23.225282 | orchestrator | 2025-11-01 12:51:23.225292 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:23.225303 | orchestrator | Saturday 01 November 2025 12:51:20 +0000 (0:00:00.918) 0:00:06.299 ***** 2025-11-01 12:51:23.225313 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 12:51:23.225324 | orchestrator | 2025-11-01 12:51:23.225335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:23.225346 | orchestrator | Saturday 01 November 2025 12:51:20 +0000 (0:00:00.328) 0:00:06.627 ***** 2025-11-01 12:51:23.225356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-11-01 12:51:23.225367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-11-01 12:51:23.225378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-11-01 12:51:23.225388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-11-01 12:51:23.225399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-11-01 12:51:23.225410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-11-01 12:51:23.225420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-11-01 12:51:23.225431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-11-01 12:51:23.225441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-11-01 12:51:23.225452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-11-01 12:51:23.225462 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-11-01 12:51:23.225473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-11-01 12:51:23.225484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-11-01 12:51:23.225494 | orchestrator |
2025-11-01 12:51:23.225505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225516 | orchestrator | Saturday 01 November 2025 12:51:21 +0000 (0:00:00.635) 0:00:07.263 *****
2025-11-01 12:51:23.225527 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225538 | orchestrator |
2025-11-01 12:51:23.225549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225559 | orchestrator | Saturday 01 November 2025 12:51:21 +0000 (0:00:00.231) 0:00:07.494 *****
2025-11-01 12:51:23.225570 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225581 | orchestrator |
2025-11-01 12:51:23.225592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225602 | orchestrator | Saturday 01 November 2025 12:51:22 +0000 (0:00:00.221) 0:00:07.716 *****
2025-11-01 12:51:23.225613 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225624 | orchestrator |
2025-11-01 12:51:23.225634 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225645 | orchestrator | Saturday 01 November 2025 12:51:22 +0000 (0:00:00.199) 0:00:07.915 *****
2025-11-01 12:51:23.225656 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225667 | orchestrator |
2025-11-01 12:51:23.225677 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225695 | orchestrator | Saturday 01 November 2025 12:51:22 +0000 (0:00:00.229) 0:00:08.145 *****
2025-11-01 12:51:23.225705 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225716 | orchestrator |
2025-11-01 12:51:23.225727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225738 | orchestrator | Saturday 01 November 2025 12:51:22 +0000 (0:00:00.196) 0:00:08.342 *****
2025-11-01 12:51:23.225748 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225759 | orchestrator |
2025-11-01 12:51:23.225770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225780 | orchestrator | Saturday 01 November 2025 12:51:22 +0000 (0:00:00.183) 0:00:08.525 *****
2025-11-01 12:51:23.225791 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:23.225802 | orchestrator |
2025-11-01 12:51:23.225813 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:23.225823 | orchestrator | Saturday 01 November 2025 12:51:23 +0000 (0:00:00.212) 0:00:08.738 *****
2025-11-01 12:51:23.225840 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.590644 | orchestrator |
2025-11-01 12:51:31.590752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:31.590769 | orchestrator | Saturday 01 November 2025 12:51:23 +0000 (0:00:00.187) 0:00:08.925 *****
2025-11-01 12:51:31.590781 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-11-01 12:51:31.590793 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-11-01 12:51:31.590804 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-11-01 12:51:31.590816 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-11-01 12:51:31.590827 | orchestrator |
2025-11-01 12:51:31.590838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:31.590849 | orchestrator | Saturday 01 November 2025 12:51:24 +0000 (0:00:01.051) 0:00:09.976 *****
2025-11-01 12:51:31.590860 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.590871 | orchestrator |
2025-11-01 12:51:31.590883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:31.590893 | orchestrator | Saturday 01 November 2025 12:51:24 +0000 (0:00:00.176) 0:00:10.153 *****
2025-11-01 12:51:31.590904 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.590915 | orchestrator |
2025-11-01 12:51:31.590926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:31.590937 | orchestrator | Saturday 01 November 2025 12:51:24 +0000 (0:00:00.221) 0:00:10.375 *****
2025-11-01 12:51:31.590948 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.590959 | orchestrator |
2025-11-01 12:51:31.590970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-11-01 12:51:31.590981 | orchestrator | Saturday 01 November 2025 12:51:24 +0000 (0:00:00.185) 0:00:10.561 *****
2025-11-01 12:51:31.590992 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591003 | orchestrator |
2025-11-01 12:51:31.591014 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-11-01 12:51:31.591025 | orchestrator | Saturday 01 November 2025 12:51:25 +0000 (0:00:00.253) 0:00:10.814 *****
2025-11-01 12:51:31.591035 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591046 | orchestrator |
2025-11-01 12:51:31.591057 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-11-01 12:51:31.591068 | orchestrator | Saturday 01 November 2025 12:51:25 +0000 (0:00:00.174) 0:00:10.989 *****
2025-11-01 12:51:31.591079 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd83d2135-3529-5759-9738-6f5d85bcdaef'}})
2025-11-01 12:51:31.591090 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2d34deeb-c147-51f6-865b-40ba131b62ad'}})
2025-11-01 12:51:31.591101 | orchestrator |
2025-11-01 12:51:31.591112 | orchestrator | TASK [Create block VGs] ********************************************************
2025-11-01 12:51:31.591123 | orchestrator | Saturday 01 November 2025 12:51:25 +0000 (0:00:00.176) 0:00:11.166 *****
2025-11-01 12:51:31.591157 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591169 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591180 | orchestrator |
2025-11-01 12:51:31.591260 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-11-01 12:51:31.591279 | orchestrator | Saturday 01 November 2025 12:51:27 +0000 (0:00:01.956) 0:00:13.122 *****
2025-11-01 12:51:31.591292 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591306 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591319 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591331 | orchestrator |
2025-11-01 12:51:31.591343 | orchestrator | TASK [Create block LVs] ********************************************************
2025-11-01 12:51:31.591355 | orchestrator | Saturday 01 November 2025 12:51:27 +0000 (0:00:00.162) 0:00:13.285 *****
2025-11-01 12:51:31.591368 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591380 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591392 | orchestrator |
2025-11-01 12:51:31.591404 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-11-01 12:51:31.591416 | orchestrator | Saturday 01 November 2025 12:51:29 +0000 (0:00:01.555) 0:00:14.841 *****
2025-11-01 12:51:31.591427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591453 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591465 | orchestrator |
2025-11-01 12:51:31.591478 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-11-01 12:51:31.591490 | orchestrator | Saturday 01 November 2025 12:51:29 +0000 (0:00:00.162) 0:00:15.003 *****
2025-11-01 12:51:31.591503 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591516 | orchestrator |
2025-11-01 12:51:31.591528 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-11-01 12:51:31.591556 | orchestrator | Saturday 01 November 2025 12:51:29 +0000 (0:00:00.149) 0:00:15.153 *****
2025-11-01 12:51:31.591569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591593 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591604 | orchestrator |
2025-11-01 12:51:31.591614 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-11-01 12:51:31.591625 | orchestrator | Saturday 01 November 2025 12:51:29 +0000 (0:00:00.487) 0:00:15.640 *****
2025-11-01 12:51:31.591636 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591646 | orchestrator |
2025-11-01 12:51:31.591657 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-11-01 12:51:31.591668 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.150) 0:00:15.791 *****
2025-11-01 12:51:31.591679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591708 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591719 | orchestrator |
2025-11-01 12:51:31.591730 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-11-01 12:51:31.591741 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.177) 0:00:15.968 *****
2025-11-01 12:51:31.591751 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591762 | orchestrator |
2025-11-01 12:51:31.591773 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-11-01 12:51:31.591784 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.142) 0:00:16.111 *****
2025-11-01 12:51:31.591794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591816 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591827 | orchestrator |
2025-11-01 12:51:31.591837 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-11-01 12:51:31.591848 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.171) 0:00:16.282 *****
2025-11-01 12:51:31.591859 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:31.591870 | orchestrator |
2025-11-01 12:51:31.591881 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-11-01 12:51:31.591892 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.169) 0:00:16.452 *****
2025-11-01 12:51:31.591907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591929 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.591940 | orchestrator |
2025-11-01 12:51:31.591951 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-11-01 12:51:31.591962 | orchestrator | Saturday 01 November 2025 12:51:30 +0000 (0:00:00.165) 0:00:16.618 *****
2025-11-01 12:51:31.591972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.591983 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.591994 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.592005 | orchestrator |
2025-11-01 12:51:31.592016 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-11-01 12:51:31.592027 | orchestrator | Saturday 01 November 2025 12:51:31 +0000 (0:00:00.172) 0:00:16.790 *****
2025-11-01 12:51:31.592037 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:31.592048 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:31.592059 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.592070 | orchestrator |
2025-11-01 12:51:31.592081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-11-01 12:51:31.592092 | orchestrator | Saturday 01 November 2025 12:51:31 +0000 (0:00:00.195) 0:00:16.986 *****
2025-11-01 12:51:31.592102 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.592119 | orchestrator |
2025-11-01 12:51:31.592130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-11-01 12:51:31.592141 | orchestrator | Saturday 01 November 2025 12:51:31 +0000 (0:00:00.160) 0:00:17.147 *****
2025-11-01 12:51:31.592152 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:31.592163 | orchestrator |
2025-11-01 12:51:31.592179 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-11-01 12:51:38.735034 | orchestrator | Saturday 01 November 2025 12:51:31 +0000 (0:00:00.143) 0:00:17.290 *****
2025-11-01 12:51:38.735146 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735163 | orchestrator |
2025-11-01 12:51:38.735176 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-11-01 12:51:38.735247 | orchestrator | Saturday 01 November 2025 12:51:31 +0000 (0:00:00.148) 0:00:17.439 *****
2025-11-01 12:51:38.735259 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:51:38.735271 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-11-01 12:51:38.735283 | orchestrator | }
2025-11-01 12:51:38.735294 | orchestrator |
2025-11-01 12:51:38.735306 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-11-01 12:51:38.735317 | orchestrator | Saturday 01 November 2025 12:51:32 +0000 (0:00:00.378) 0:00:17.817 *****
2025-11-01 12:51:38.735328 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:51:38.735339 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-11-01 12:51:38.735350 | orchestrator | }
2025-11-01 12:51:38.735361 | orchestrator |
2025-11-01 12:51:38.735372 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-11-01 12:51:38.735383 | orchestrator | Saturday 01 November 2025 12:51:32 +0000 (0:00:00.175) 0:00:17.992 *****
2025-11-01 12:51:38.735394 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:51:38.735405 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-11-01 12:51:38.735416 | orchestrator | }
2025-11-01 12:51:38.735428 | orchestrator |
2025-11-01 12:51:38.735439 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-11-01 12:51:38.735450 | orchestrator | Saturday 01 November 2025 12:51:32 +0000 (0:00:00.166) 0:00:18.158 *****
2025-11-01 12:51:38.735461 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:38.735472 | orchestrator |
2025-11-01 12:51:38.735483 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-11-01 12:51:38.735494 | orchestrator | Saturday 01 November 2025 12:51:33 +0000 (0:00:00.730) 0:00:18.889 *****
2025-11-01 12:51:38.735505 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:38.735516 | orchestrator |
2025-11-01 12:51:38.735527 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-11-01 12:51:38.735538 | orchestrator | Saturday 01 November 2025 12:51:33 +0000 (0:00:00.529) 0:00:19.419 *****
2025-11-01 12:51:38.735549 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:38.735560 | orchestrator |
2025-11-01 12:51:38.735571 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-11-01 12:51:38.735584 | orchestrator | Saturday 01 November 2025 12:51:34 +0000 (0:00:00.567) 0:00:19.986 *****
2025-11-01 12:51:38.735596 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:38.735608 | orchestrator |
2025-11-01 12:51:38.735620 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-11-01 12:51:38.735632 | orchestrator | Saturday 01 November 2025 12:51:34 +0000 (0:00:00.193) 0:00:20.184 *****
2025-11-01 12:51:38.735644 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735656 | orchestrator |
2025-11-01 12:51:38.735668 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-11-01 12:51:38.735680 | orchestrator | Saturday 01 November 2025 12:51:34 +0000 (0:00:00.176) 0:00:20.378 *****
2025-11-01 12:51:38.735692 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735704 | orchestrator |
2025-11-01 12:51:38.735716 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-11-01 12:51:38.735729 | orchestrator | Saturday 01 November 2025 12:51:34 +0000 (0:00:00.231) 0:00:20.555 *****
2025-11-01 12:51:38.735763 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:51:38.735776 | orchestrator |  "vgs_report": {
2025-11-01 12:51:38.735789 | orchestrator |  "vg": []
2025-11-01 12:51:38.735801 | orchestrator |  }
2025-11-01 12:51:38.735813 | orchestrator | }
2025-11-01 12:51:38.735826 | orchestrator |
2025-11-01 12:51:38.735838 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-11-01 12:51:38.735850 | orchestrator | Saturday 01 November 2025 12:51:35 +0000 (0:00:00.231) 0:00:20.786 *****
2025-11-01 12:51:38.735862 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735874 | orchestrator |
2025-11-01 12:51:38.735885 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-11-01 12:51:38.735898 | orchestrator | Saturday 01 November 2025 12:51:35 +0000 (0:00:00.184) 0:00:20.971 *****
2025-11-01 12:51:38.735910 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735922 | orchestrator |
2025-11-01 12:51:38.735934 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-11-01 12:51:38.735945 | orchestrator | Saturday 01 November 2025 12:51:35 +0000 (0:00:00.130) 0:00:21.101 *****
2025-11-01 12:51:38.735956 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.735967 | orchestrator |
2025-11-01 12:51:38.735977 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-11-01 12:51:38.735988 | orchestrator | Saturday 01 November 2025 12:51:35 +0000 (0:00:00.453) 0:00:21.555 *****
2025-11-01 12:51:38.735999 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736010 | orchestrator |
2025-11-01 12:51:38.736020 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-11-01 12:51:38.736031 | orchestrator | Saturday 01 November 2025 12:51:35 +0000 (0:00:00.151) 0:00:21.706 *****
2025-11-01 12:51:38.736042 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736053 | orchestrator |
2025-11-01 12:51:38.736080 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-11-01 12:51:38.736092 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.164) 0:00:21.871 *****
2025-11-01 12:51:38.736103 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736113 | orchestrator |
2025-11-01 12:51:38.736124 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-11-01 12:51:38.736135 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.153) 0:00:22.024 *****
2025-11-01 12:51:38.736146 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736156 | orchestrator |
2025-11-01 12:51:38.736167 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-11-01 12:51:38.736178 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.157) 0:00:22.182 *****
2025-11-01 12:51:38.736207 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736218 | orchestrator |
2025-11-01 12:51:38.736229 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-11-01 12:51:38.736256 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.142) 0:00:22.324 *****
2025-11-01 12:51:38.736267 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736278 | orchestrator |
2025-11-01 12:51:38.736289 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-11-01 12:51:38.736299 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.155) 0:00:22.479 *****
2025-11-01 12:51:38.736310 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736321 | orchestrator |
2025-11-01 12:51:38.736331 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-11-01 12:51:38.736342 | orchestrator | Saturday 01 November 2025 12:51:36 +0000 (0:00:00.145) 0:00:22.624 *****
2025-11-01 12:51:38.736352 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736363 | orchestrator |
2025-11-01 12:51:38.736374 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-11-01 12:51:38.736384 | orchestrator | Saturday 01 November 2025 12:51:37 +0000 (0:00:00.140) 0:00:22.765 *****
2025-11-01 12:51:38.736395 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736406 | orchestrator |
2025-11-01 12:51:38.736425 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-11-01 12:51:38.736436 | orchestrator | Saturday 01 November 2025 12:51:37 +0000 (0:00:00.140) 0:00:22.905 *****
2025-11-01 12:51:38.736447 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736458 | orchestrator |
2025-11-01 12:51:38.736469 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-11-01 12:51:38.736479 | orchestrator | Saturday 01 November 2025 12:51:37 +0000 (0:00:00.142) 0:00:23.047 *****
2025-11-01 12:51:38.736490 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736501 | orchestrator |
2025-11-01 12:51:38.736511 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-11-01 12:51:38.736522 | orchestrator | Saturday 01 November 2025 12:51:37 +0000 (0:00:00.135) 0:00:23.183 *****
2025-11-01 12:51:38.736534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:38.736557 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736568 | orchestrator |
2025-11-01 12:51:38.736578 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-11-01 12:51:38.736589 | orchestrator | Saturday 01 November 2025 12:51:37 +0000 (0:00:00.390) 0:00:23.573 *****
2025-11-01 12:51:38.736600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:38.736622 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736632 | orchestrator |
2025-11-01 12:51:38.736643 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-11-01 12:51:38.736654 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.161) 0:00:23.734 *****
2025-11-01 12:51:38.736670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:38.736692 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736703 | orchestrator |
2025-11-01 12:51:38.736713 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-11-01 12:51:38.736724 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.172) 0:00:23.907 *****
2025-11-01 12:51:38.736735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:38.736756 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736767 | orchestrator |
2025-11-01 12:51:38.736777 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-11-01 12:51:38.736788 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.155) 0:00:24.062 *****
2025-11-01 12:51:38.736799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:38.736820 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:38.736837 | orchestrator |
2025-11-01 12:51:38.736848 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-11-01 12:51:38.736859 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.201) 0:00:24.263 *****
2025-11-01 12:51:38.736869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:38.736886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.606569 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.606668 | orchestrator |
2025-11-01 12:51:44.606684 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-11-01 12:51:44.606698 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.173) 0:00:24.437 *****
2025-11-01 12:51:44.606710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.606723 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.606734 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.606745 | orchestrator |
2025-11-01 12:51:44.606757 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-11-01 12:51:44.606768 | orchestrator | Saturday 01 November 2025 12:51:38 +0000 (0:00:00.178) 0:00:24.616 *****
2025-11-01 12:51:44.606779 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.606790 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.606801 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.606812 | orchestrator |
2025-11-01 12:51:44.606823 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-11-01 12:51:44.606834 | orchestrator | Saturday 01 November 2025 12:51:39 +0000 (0:00:00.166) 0:00:24.782 *****
2025-11-01 12:51:44.606845 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:44.606856 | orchestrator |
2025-11-01 12:51:44.606867 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-11-01 12:51:44.606878 | orchestrator | Saturday 01 November 2025 12:51:39 +0000 (0:00:00.514) 0:00:25.297 *****
2025-11-01 12:51:44.606889 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:44.606899 | orchestrator |
2025-11-01 12:51:44.606910 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-11-01 12:51:44.606921 | orchestrator | Saturday 01 November 2025 12:51:40 +0000 (0:00:00.520) 0:00:25.818 *****
2025-11-01 12:51:44.606931 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:51:44.606942 | orchestrator |
2025-11-01 12:51:44.606953 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-11-01 12:51:44.606964 | orchestrator | Saturday 01 November 2025 12:51:40 +0000 (0:00:00.170) 0:00:25.988 *****
2025-11-01 12:51:44.606975 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'vg_name': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.606986 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'vg_name': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.606997 | orchestrator |
2025-11-01 12:51:44.607008 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-11-01 12:51:44.607019 | orchestrator | Saturday 01 November 2025 12:51:40 +0000 (0:00:00.185) 0:00:26.174 *****
2025-11-01 12:51:44.607030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.607063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.607075 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.607086 | orchestrator |
2025-11-01 12:51:44.607096 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-11-01 12:51:44.607107 | orchestrator | Saturday 01 November 2025 12:51:40 +0000 (0:00:00.406) 0:00:26.580 *****
2025-11-01 12:51:44.607120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.607132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.607145 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.607157 | orchestrator |
2025-11-01 12:51:44.607169 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-11-01 12:51:44.607182 | orchestrator | Saturday 01 November 2025 12:51:41 +0000 (0:00:00.218) 0:00:26.799 *****
2025-11-01 12:51:44.607219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 12:51:44.607232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 12:51:44.607245 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:51:44.607256 | orchestrator |
2025-11-01 12:51:44.607267 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-11-01 12:51:44.607278 | orchestrator | Saturday 01 November 2025 12:51:41 +0000 (0:00:00.168) 0:00:26.967 *****
2025-11-01 12:51:44.607289 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 12:51:44.607299 | orchestrator |  "lvm_report": {
2025-11-01 12:51:44.607310 | orchestrator |  "lv": [
2025-11-01 12:51:44.607321 | orchestrator |  {
2025-11-01 12:51:44.607349 | orchestrator |  "lv_name": "osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad",
2025-11-01 12:51:44.607361 | orchestrator |  "vg_name": "ceph-2d34deeb-c147-51f6-865b-40ba131b62ad"
2025-11-01 12:51:44.607372 | orchestrator |  },
2025-11-01 12:51:44.607382 | orchestrator |  {
2025-11-01 12:51:44.607393 | orchestrator |  "lv_name": "osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef",
2025-11-01 12:51:44.607404 | orchestrator |  "vg_name": "ceph-d83d2135-3529-5759-9738-6f5d85bcdaef"
2025-11-01 12:51:44.607415 | orchestrator |  }
2025-11-01 12:51:44.607425 | orchestrator |  ],
2025-11-01 12:51:44.607436 | orchestrator |  "pv": [
2025-11-01 12:51:44.607447 | orchestrator |  {
2025-11-01 12:51:44.607458 | orchestrator |  "pv_name": "/dev/sdb",
2025-11-01 12:51:44.607468 | orchestrator |  "vg_name": "ceph-d83d2135-3529-5759-9738-6f5d85bcdaef"
2025-11-01 12:51:44.607479 | orchestrator |  },
2025-11-01 12:51:44.607490 | orchestrator |  {
2025-11-01 12:51:44.607501 | orchestrator |  "pv_name": "/dev/sdc",
2025-11-01 12:51:44.607511 | orchestrator |  "vg_name": "ceph-2d34deeb-c147-51f6-865b-40ba131b62ad"
2025-11-01 12:51:44.607522 | orchestrator |  }
2025-11-01 12:51:44.607533 | orchestrator |  ]
2025-11-01 12:51:44.607544 | orchestrator |  }
2025-11-01 12:51:44.607555 | orchestrator | }
2025-11-01 12:51:44.607566 | orchestrator |
2025-11-01 12:51:44.607577 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-11-01 12:51:44.607588 | orchestrator |
2025-11-01 12:51:44.607598 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-01 12:51:44.607609 | orchestrator | Saturday 01 November 2025 12:51:41 +0000 (0:00:00.320) 0:00:27.288 *****
2025-11-01 12:51:44.607620 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-11-01 12:51:44.607638 | orchestrator |
2025-11-01 12:51:44.607650 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-11-01 12:51:44.607660 | orchestrator | Saturday 01 November 2025 12:51:41 +0000 (0:00:00.286) 0:00:27.574 *****
2025-11-01 12:51:44.607671 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:51:44.607682 | orchestrator |
2025-11-01 12:51:44.607692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.607703 | orchestrator | Saturday 01 November 2025 12:51:42 +0000 (0:00:00.266) 0:00:27.841 *****
2025-11-01 12:51:44.607729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-11-01 12:51:44.607741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-11-01 12:51:44.607752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-11-01 12:51:44.607763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-11-01 12:51:44.607773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-11-01 12:51:44.607784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-11-01 12:51:44.607795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-11-01 12:51:44.607819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-11-01 12:51:44.607830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-11-01 12:51:44.607841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-11-01 12:51:44.607851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-11-01 12:51:44.607862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-11-01 12:51:44.607873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-11-01 12:51:44.607883 | orchestrator |
2025-11-01 12:51:44.607894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.607905 | orchestrator | Saturday 01 November 2025 12:51:42 +0000 (0:00:00.436) 0:00:28.278 *****
2025-11-01 12:51:44.607915 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:51:44.607926 | orchestrator |
2025-11-01 12:51:44.607937 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.607947 | orchestrator | Saturday 01 November 2025 12:51:42 +0000 (0:00:00.208) 0:00:28.487 *****
2025-11-01 12:51:44.607958 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:51:44.607969 | orchestrator |
2025-11-01 12:51:44.607980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.607990 | orchestrator | Saturday 01 November 2025 12:51:42 +0000 (0:00:00.204) 0:00:28.691 *****
2025-11-01 12:51:44.608001 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:51:44.608012 | orchestrator |
2025-11-01 12:51:44.608023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.608033 | orchestrator | Saturday 01 November 2025 12:51:43 +0000 (0:00:00.728) 0:00:29.419 *****
2025-11-01 12:51:44.608044 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:51:44.608055 | orchestrator |
2025-11-01 12:51:44.608065 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.608076 | orchestrator | Saturday 01 November 2025 12:51:43 +0000 (0:00:00.222) 0:00:29.642 *****
2025-11-01 12:51:44.608087 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:51:44.608098 | orchestrator |
2025-11-01 12:51:44.608108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-11-01 12:51:44.608119 | orchestrator | Saturday 01 November 2025 12:51:44 +0000 (0:00:00.233) 0:00:29.875 *****
2025-11-01
12:51:44.608129 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:44.608140 | orchestrator | 2025-11-01 12:51:44.608158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:44.608169 | orchestrator | Saturday 01 November 2025 12:51:44 +0000 (0:00:00.213) 0:00:30.089 ***** 2025-11-01 12:51:44.608179 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:44.608207 | orchestrator | 2025-11-01 12:51:44.608225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:56.695541 | orchestrator | Saturday 01 November 2025 12:51:44 +0000 (0:00:00.218) 0:00:30.307 ***** 2025-11-01 12:51:56.695644 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.695660 | orchestrator | 2025-11-01 12:51:56.695673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:56.695685 | orchestrator | Saturday 01 November 2025 12:51:44 +0000 (0:00:00.227) 0:00:30.535 ***** 2025-11-01 12:51:56.695696 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df) 2025-11-01 12:51:56.695708 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df) 2025-11-01 12:51:56.695719 | orchestrator | 2025-11-01 12:51:56.695730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:56.695741 | orchestrator | Saturday 01 November 2025 12:51:45 +0000 (0:00:00.473) 0:00:31.008 ***** 2025-11-01 12:51:56.695752 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db) 2025-11-01 12:51:56.695763 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db) 2025-11-01 12:51:56.695773 | orchestrator | 2025-11-01 12:51:56.695784 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-11-01 12:51:56.695795 | orchestrator | Saturday 01 November 2025 12:51:45 +0000 (0:00:00.581) 0:00:31.590 ***** 2025-11-01 12:51:56.695805 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805) 2025-11-01 12:51:56.695816 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805) 2025-11-01 12:51:56.695827 | orchestrator | 2025-11-01 12:51:56.695837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:56.695848 | orchestrator | Saturday 01 November 2025 12:51:46 +0000 (0:00:00.492) 0:00:32.082 ***** 2025-11-01 12:51:56.695859 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03) 2025-11-01 12:51:56.695870 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03) 2025-11-01 12:51:56.695880 | orchestrator | 2025-11-01 12:51:56.695891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:51:56.695902 | orchestrator | Saturday 01 November 2025 12:51:47 +0000 (0:00:00.715) 0:00:32.798 ***** 2025-11-01 12:51:56.695912 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 12:51:56.695923 | orchestrator | 2025-11-01 12:51:56.695934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.695945 | orchestrator | Saturday 01 November 2025 12:51:47 +0000 (0:00:00.626) 0:00:33.425 ***** 2025-11-01 12:51:56.695956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-11-01 12:51:56.695982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-11-01 12:51:56.695993 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-11-01 12:51:56.696004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-11-01 12:51:56.696015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-11-01 12:51:56.696025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-11-01 12:51:56.696036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-11-01 12:51:56.696066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-11-01 12:51:56.696080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-11-01 12:51:56.696092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-11-01 12:51:56.696104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-11-01 12:51:56.696116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-11-01 12:51:56.696127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-11-01 12:51:56.696139 | orchestrator | 2025-11-01 12:51:56.696151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696163 | orchestrator | Saturday 01 November 2025 12:51:48 +0000 (0:00:00.977) 0:00:34.403 ***** 2025-11-01 12:51:56.696175 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696212 | orchestrator | 2025-11-01 12:51:56.696225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696237 | orchestrator | Saturday 01 November 2025 12:51:48 +0000 
(0:00:00.207) 0:00:34.610 ***** 2025-11-01 12:51:56.696249 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696262 | orchestrator | 2025-11-01 12:51:56.696275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696287 | orchestrator | Saturday 01 November 2025 12:51:49 +0000 (0:00:00.237) 0:00:34.848 ***** 2025-11-01 12:51:56.696299 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696311 | orchestrator | 2025-11-01 12:51:56.696323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696335 | orchestrator | Saturday 01 November 2025 12:51:49 +0000 (0:00:00.268) 0:00:35.117 ***** 2025-11-01 12:51:56.696347 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696359 | orchestrator | 2025-11-01 12:51:56.696387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696401 | orchestrator | Saturday 01 November 2025 12:51:49 +0000 (0:00:00.215) 0:00:35.332 ***** 2025-11-01 12:51:56.696413 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696426 | orchestrator | 2025-11-01 12:51:56.696437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696447 | orchestrator | Saturday 01 November 2025 12:51:49 +0000 (0:00:00.233) 0:00:35.566 ***** 2025-11-01 12:51:56.696458 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696469 | orchestrator | 2025-11-01 12:51:56.696479 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696490 | orchestrator | Saturday 01 November 2025 12:51:50 +0000 (0:00:00.226) 0:00:35.793 ***** 2025-11-01 12:51:56.696501 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696511 | orchestrator | 2025-11-01 12:51:56.696522 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696533 | orchestrator | Saturday 01 November 2025 12:51:50 +0000 (0:00:00.232) 0:00:36.025 ***** 2025-11-01 12:51:56.696543 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696554 | orchestrator | 2025-11-01 12:51:56.696565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696575 | orchestrator | Saturday 01 November 2025 12:51:50 +0000 (0:00:00.223) 0:00:36.248 ***** 2025-11-01 12:51:56.696586 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-11-01 12:51:56.696597 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-11-01 12:51:56.696608 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-11-01 12:51:56.696619 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-11-01 12:51:56.696629 | orchestrator | 2025-11-01 12:51:56.696641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696652 | orchestrator | Saturday 01 November 2025 12:51:51 +0000 (0:00:00.973) 0:00:37.222 ***** 2025-11-01 12:51:56.696670 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696681 | orchestrator | 2025-11-01 12:51:56.696692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696702 | orchestrator | Saturday 01 November 2025 12:51:51 +0000 (0:00:00.222) 0:00:37.444 ***** 2025-11-01 12:51:56.696713 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696724 | orchestrator | 2025-11-01 12:51:56.696734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696745 | orchestrator | Saturday 01 November 2025 12:51:52 +0000 (0:00:00.741) 0:00:38.186 ***** 2025-11-01 12:51:56.696755 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696766 | orchestrator | 2025-11-01 
12:51:56.696777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:51:56.696788 | orchestrator | Saturday 01 November 2025 12:51:52 +0000 (0:00:00.241) 0:00:38.427 ***** 2025-11-01 12:51:56.696798 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696809 | orchestrator | 2025-11-01 12:51:56.696820 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-01 12:51:56.696830 | orchestrator | Saturday 01 November 2025 12:51:52 +0000 (0:00:00.203) 0:00:38.631 ***** 2025-11-01 12:51:56.696841 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.696852 | orchestrator | 2025-11-01 12:51:56.696862 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-11-01 12:51:56.696873 | orchestrator | Saturday 01 November 2025 12:51:53 +0000 (0:00:00.162) 0:00:38.794 ***** 2025-11-01 12:51:56.696884 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '277f9d3d-0c20-556e-833f-7bea0f2408d1'}}) 2025-11-01 12:51:56.696895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '780930f3-bf13-5252-a15a-5f9f469ca774'}}) 2025-11-01 12:51:56.696905 | orchestrator | 2025-11-01 12:51:56.696916 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-01 12:51:56.696927 | orchestrator | Saturday 01 November 2025 12:51:53 +0000 (0:00:00.198) 0:00:38.992 ***** 2025-11-01 12:51:56.696938 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'}) 2025-11-01 12:51:56.696949 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'}) 2025-11-01 12:51:56.696960 | orchestrator | 2025-11-01 
12:51:56.696971 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-01 12:51:56.696981 | orchestrator | Saturday 01 November 2025 12:51:55 +0000 (0:00:01.879) 0:00:40.872 ***** 2025-11-01 12:51:56.696992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:51:56.697004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:51:56.697014 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:51:56.697025 | orchestrator | 2025-11-01 12:51:56.697036 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-11-01 12:51:56.697046 | orchestrator | Saturday 01 November 2025 12:51:55 +0000 (0:00:00.169) 0:00:41.041 ***** 2025-11-01 12:51:56.697057 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'}) 2025-11-01 12:51:56.697068 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'}) 2025-11-01 12:51:56.697079 | orchestrator | 2025-11-01 12:51:56.697096 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-11-01 12:52:02.811312 | orchestrator | Saturday 01 November 2025 12:51:56 +0000 (0:00:01.351) 0:00:42.393 ***** 2025-11-01 12:52:02.811441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.811459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 
'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.811471 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811483 | orchestrator | 2025-11-01 12:52:02.811495 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-11-01 12:52:02.811506 | orchestrator | Saturday 01 November 2025 12:51:56 +0000 (0:00:00.153) 0:00:42.547 ***** 2025-11-01 12:52:02.811517 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811528 | orchestrator | 2025-11-01 12:52:02.811539 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-11-01 12:52:02.811550 | orchestrator | Saturday 01 November 2025 12:51:56 +0000 (0:00:00.158) 0:00:42.705 ***** 2025-11-01 12:52:02.811561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.811586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.811598 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811608 | orchestrator | 2025-11-01 12:52:02.811619 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-11-01 12:52:02.811630 | orchestrator | Saturday 01 November 2025 12:51:57 +0000 (0:00:00.167) 0:00:42.873 ***** 2025-11-01 12:52:02.811641 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811651 | orchestrator | 2025-11-01 12:52:02.811662 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-11-01 12:52:02.811673 | orchestrator | Saturday 01 November 2025 12:51:57 +0000 (0:00:00.132) 0:00:43.005 ***** 2025-11-01 12:52:02.811684 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.811694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.811705 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811716 | orchestrator | 2025-11-01 12:52:02.811727 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-11-01 12:52:02.811737 | orchestrator | Saturday 01 November 2025 12:51:57 +0000 (0:00:00.411) 0:00:43.416 ***** 2025-11-01 12:52:02.811753 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811764 | orchestrator | 2025-11-01 12:52:02.811774 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-11-01 12:52:02.811785 | orchestrator | Saturday 01 November 2025 12:51:57 +0000 (0:00:00.148) 0:00:43.565 ***** 2025-11-01 12:52:02.811796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.811810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.811822 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811834 | orchestrator | 2025-11-01 12:52:02.811846 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-11-01 12:52:02.811858 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.186) 0:00:43.751 ***** 2025-11-01 12:52:02.811871 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:02.811883 | orchestrator | 2025-11-01 12:52:02.811895 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-11-01 12:52:02.811907 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.160) 0:00:43.912 ***** 2025-11-01 12:52:02.811926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.811940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.811952 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.811964 | orchestrator | 2025-11-01 12:52:02.811976 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-11-01 12:52:02.811988 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.167) 0:00:44.079 ***** 2025-11-01 12:52:02.812000 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.812012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.812024 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812036 | orchestrator | 2025-11-01 12:52:02.812049 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-11-01 12:52:02.812061 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.198) 0:00:44.278 ***** 2025-11-01 12:52:02.812091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:02.812105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 
'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:02.812117 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812129 | orchestrator | 2025-11-01 12:52:02.812142 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-11-01 12:52:02.812154 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.172) 0:00:44.451 ***** 2025-11-01 12:52:02.812165 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812176 | orchestrator | 2025-11-01 12:52:02.812210 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-11-01 12:52:02.812222 | orchestrator | Saturday 01 November 2025 12:51:58 +0000 (0:00:00.158) 0:00:44.610 ***** 2025-11-01 12:52:02.812232 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812243 | orchestrator | 2025-11-01 12:52:02.812254 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-11-01 12:52:02.812264 | orchestrator | Saturday 01 November 2025 12:51:59 +0000 (0:00:00.148) 0:00:44.759 ***** 2025-11-01 12:52:02.812275 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812286 | orchestrator | 2025-11-01 12:52:02.812297 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-11-01 12:52:02.812307 | orchestrator | Saturday 01 November 2025 12:51:59 +0000 (0:00:00.160) 0:00:44.919 ***** 2025-11-01 12:52:02.812318 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 12:52:02.812329 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-11-01 12:52:02.812340 | orchestrator | } 2025-11-01 12:52:02.812351 | orchestrator | 2025-11-01 12:52:02.812362 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-11-01 12:52:02.812372 | orchestrator | Saturday 01 November 2025 12:51:59 +0000 (0:00:00.151) 0:00:45.071 ***** 2025-11-01 12:52:02.812383 | orchestrator | 
ok: [testbed-node-4] => { 2025-11-01 12:52:02.812394 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-11-01 12:52:02.812404 | orchestrator | } 2025-11-01 12:52:02.812415 | orchestrator | 2025-11-01 12:52:02.812426 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-11-01 12:52:02.812436 | orchestrator | Saturday 01 November 2025 12:51:59 +0000 (0:00:00.147) 0:00:45.218 ***** 2025-11-01 12:52:02.812447 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 12:52:02.812458 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-11-01 12:52:02.812476 | orchestrator | } 2025-11-01 12:52:02.812487 | orchestrator | 2025-11-01 12:52:02.812497 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-11-01 12:52:02.812508 | orchestrator | Saturday 01 November 2025 12:51:59 +0000 (0:00:00.375) 0:00:45.594 ***** 2025-11-01 12:52:02.812519 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:02.812530 | orchestrator | 2025-11-01 12:52:02.812541 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-11-01 12:52:02.812551 | orchestrator | Saturday 01 November 2025 12:52:00 +0000 (0:00:00.543) 0:00:46.137 ***** 2025-11-01 12:52:02.812575 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:02.812586 | orchestrator | 2025-11-01 12:52:02.812597 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-11-01 12:52:02.812608 | orchestrator | Saturday 01 November 2025 12:52:01 +0000 (0:00:00.655) 0:00:46.793 ***** 2025-11-01 12:52:02.812619 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:02.812629 | orchestrator | 2025-11-01 12:52:02.812640 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-11-01 12:52:02.812650 | orchestrator | Saturday 01 November 2025 12:52:01 +0000 (0:00:00.549) 0:00:47.343 ***** 2025-11-01 
12:52:02.812661 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:02.812672 | orchestrator | 2025-11-01 12:52:02.812682 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-11-01 12:52:02.812693 | orchestrator | Saturday 01 November 2025 12:52:01 +0000 (0:00:00.170) 0:00:47.514 ***** 2025-11-01 12:52:02.812704 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812715 | orchestrator | 2025-11-01 12:52:02.812725 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-11-01 12:52:02.812736 | orchestrator | Saturday 01 November 2025 12:52:01 +0000 (0:00:00.134) 0:00:47.649 ***** 2025-11-01 12:52:02.812747 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812757 | orchestrator | 2025-11-01 12:52:02.812768 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-11-01 12:52:02.812779 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.113) 0:00:47.762 ***** 2025-11-01 12:52:02.812789 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 12:52:02.812800 | orchestrator |  "vgs_report": { 2025-11-01 12:52:02.812811 | orchestrator |  "vg": [] 2025-11-01 12:52:02.812822 | orchestrator |  } 2025-11-01 12:52:02.812833 | orchestrator | } 2025-11-01 12:52:02.812843 | orchestrator | 2025-11-01 12:52:02.812854 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-11-01 12:52:02.812865 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.161) 0:00:47.924 ***** 2025-11-01 12:52:02.812875 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812886 | orchestrator | 2025-11-01 12:52:02.812897 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-11-01 12:52:02.812908 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.155) 0:00:48.080 ***** 2025-11-01 
12:52:02.812918 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812929 | orchestrator | 2025-11-01 12:52:02.812940 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-11-01 12:52:02.812951 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.143) 0:00:48.224 ***** 2025-11-01 12:52:02.812962 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.812972 | orchestrator | 2025-11-01 12:52:02.812983 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-11-01 12:52:02.812994 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.150) 0:00:48.374 ***** 2025-11-01 12:52:02.813005 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:02.813016 | orchestrator | 2025-11-01 12:52:02.813027 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-11-01 12:52:02.813045 | orchestrator | Saturday 01 November 2025 12:52:02 +0000 (0:00:00.131) 0:00:48.506 ***** 2025-11-01 12:52:08.118436 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118544 | orchestrator | 2025-11-01 12:52:08.118583 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-11-01 12:52:08.118597 | orchestrator | Saturday 01 November 2025 12:52:03 +0000 (0:00:00.403) 0:00:48.909 ***** 2025-11-01 12:52:08.118609 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118620 | orchestrator | 2025-11-01 12:52:08.118631 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-11-01 12:52:08.118643 | orchestrator | Saturday 01 November 2025 12:52:03 +0000 (0:00:00.181) 0:00:49.091 ***** 2025-11-01 12:52:08.118654 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118665 | orchestrator | 2025-11-01 12:52:08.118676 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-11-01 12:52:08.118687 | orchestrator | Saturday 01 November 2025 12:52:03 +0000 (0:00:00.147) 0:00:49.238 ***** 2025-11-01 12:52:08.118698 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118709 | orchestrator | 2025-11-01 12:52:08.118720 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-11-01 12:52:08.118731 | orchestrator | Saturday 01 November 2025 12:52:03 +0000 (0:00:00.189) 0:00:49.428 ***** 2025-11-01 12:52:08.118742 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118753 | orchestrator | 2025-11-01 12:52:08.118764 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-11-01 12:52:08.118775 | orchestrator | Saturday 01 November 2025 12:52:03 +0000 (0:00:00.170) 0:00:49.598 ***** 2025-11-01 12:52:08.118786 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118797 | orchestrator | 2025-11-01 12:52:08.118809 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-11-01 12:52:08.118819 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.189) 0:00:49.787 ***** 2025-11-01 12:52:08.118830 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118841 | orchestrator | 2025-11-01 12:52:08.118852 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-11-01 12:52:08.118863 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.208) 0:00:49.996 ***** 2025-11-01 12:52:08.118874 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118886 | orchestrator | 2025-11-01 12:52:08.118897 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-11-01 12:52:08.118908 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.167) 0:00:50.164 ***** 2025-11-01 12:52:08.118918 | orchestrator | skipping: [testbed-node-4] 
2025-11-01 12:52:08.118930 | orchestrator | 2025-11-01 12:52:08.118941 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-11-01 12:52:08.118951 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.160) 0:00:50.325 ***** 2025-11-01 12:52:08.118963 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.118976 | orchestrator | 2025-11-01 12:52:08.118989 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-11-01 12:52:08.119002 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.138) 0:00:50.463 ***** 2025-11-01 12:52:08.119030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119059 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119072 | orchestrator | 2025-11-01 12:52:08.119085 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-11-01 12:52:08.119098 | orchestrator | Saturday 01 November 2025 12:52:04 +0000 (0:00:00.180) 0:00:50.644 ***** 2025-11-01 12:52:08.119111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119143 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119156 | orchestrator | 2025-11-01 12:52:08.119168 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-11-01 12:52:08.119181 | orchestrator | Saturday 01 November 2025 12:52:05 +0000 (0:00:00.171) 0:00:50.815 ***** 2025-11-01 12:52:08.119218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119243 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119256 | orchestrator | 2025-11-01 12:52:08.119269 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-11-01 12:52:08.119281 | orchestrator | Saturday 01 November 2025 12:52:05 +0000 (0:00:00.391) 0:00:51.207 ***** 2025-11-01 12:52:08.119294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119307 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119321 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119332 | orchestrator | 2025-11-01 12:52:08.119343 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-11-01 12:52:08.119371 | orchestrator | Saturday 01 November 2025 12:52:05 +0000 (0:00:00.170) 0:00:51.378 ***** 2025-11-01 12:52:08.119383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 
'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119405 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119415 | orchestrator | 2025-11-01 12:52:08.119426 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-11-01 12:52:08.119437 | orchestrator | Saturday 01 November 2025 12:52:05 +0000 (0:00:00.187) 0:00:51.566 ***** 2025-11-01 12:52:08.119448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119470 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119481 | orchestrator | 2025-11-01 12:52:08.119492 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-11-01 12:52:08.119503 | orchestrator | Saturday 01 November 2025 12:52:06 +0000 (0:00:00.157) 0:00:51.723 ***** 2025-11-01 12:52:08.119514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119536 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119547 | orchestrator | 2025-11-01 12:52:08.119558 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-11-01 12:52:08.119569 | orchestrator | Saturday 01 November 2025 12:52:06 +0000 (0:00:00.170) 0:00:51.894 ***** 2025-11-01 12:52:08.119579 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119597 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119609 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119619 | orchestrator | 2025-11-01 12:52:08.119631 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-11-01 12:52:08.119678 | orchestrator | Saturday 01 November 2025 12:52:06 +0000 (0:00:00.167) 0:00:52.061 ***** 2025-11-01 12:52:08.119690 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:08.119701 | orchestrator | 2025-11-01 12:52:08.119712 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-11-01 12:52:08.119723 | orchestrator | Saturday 01 November 2025 12:52:06 +0000 (0:00:00.527) 0:00:52.589 ***** 2025-11-01 12:52:08.119734 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:08.119745 | orchestrator | 2025-11-01 12:52:08.119755 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-11-01 12:52:08.119766 | orchestrator | Saturday 01 November 2025 12:52:07 +0000 (0:00:00.519) 0:00:53.109 ***** 2025-11-01 12:52:08.119777 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:52:08.119788 | orchestrator | 2025-11-01 12:52:08.119799 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-11-01 12:52:08.119810 | orchestrator | Saturday 01 November 2025 12:52:07 +0000 (0:00:00.176) 0:00:53.285 ***** 2025-11-01 12:52:08.119821 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'vg_name': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'}) 2025-11-01 12:52:08.119833 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'vg_name': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'}) 2025-11-01 12:52:08.119844 | orchestrator | 2025-11-01 12:52:08.119855 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-11-01 12:52:08.119865 | orchestrator | Saturday 01 November 2025 12:52:07 +0000 (0:00:00.186) 0:00:53.472 ***** 2025-11-01 12:52:08.119876 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119898 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:08.119909 | orchestrator | 2025-11-01 12:52:08.119920 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-11-01 12:52:08.119930 | orchestrator | Saturday 01 November 2025 12:52:07 +0000 (0:00:00.174) 0:00:53.647 ***** 2025-11-01 12:52:08.119941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:08.119952 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:08.119970 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:14.709368 | orchestrator | 2025-11-01 12:52:14.709472 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-11-01 12:52:14.709489 | orchestrator | Saturday 01 November 2025 12:52:08 +0000 (0:00:00.170) 0:00:53.817 ***** 2025-11-01 12:52:14.709502 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})  2025-11-01 12:52:14.709515 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})  2025-11-01 12:52:14.709526 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:52:14.709538 | orchestrator | 2025-11-01 12:52:14.709549 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-11-01 12:52:14.709560 | orchestrator | Saturday 01 November 2025 12:52:08 +0000 (0:00:00.194) 0:00:54.012 ***** 2025-11-01 12:52:14.709593 | orchestrator | ok: [testbed-node-4] => { 2025-11-01 12:52:14.709605 | orchestrator |  "lvm_report": { 2025-11-01 12:52:14.709617 | orchestrator |  "lv": [ 2025-11-01 12:52:14.709628 | orchestrator |  { 2025-11-01 12:52:14.709639 | orchestrator |  "lv_name": "osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1", 2025-11-01 12:52:14.709650 | orchestrator |  "vg_name": "ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1" 2025-11-01 12:52:14.709661 | orchestrator |  }, 2025-11-01 12:52:14.709671 | orchestrator |  { 2025-11-01 12:52:14.709682 | orchestrator |  "lv_name": "osd-block-780930f3-bf13-5252-a15a-5f9f469ca774", 2025-11-01 12:52:14.709693 | orchestrator |  "vg_name": "ceph-780930f3-bf13-5252-a15a-5f9f469ca774" 2025-11-01 12:52:14.709704 | orchestrator |  } 2025-11-01 12:52:14.709714 | orchestrator |  ], 2025-11-01 12:52:14.709725 | orchestrator |  "pv": [ 2025-11-01 12:52:14.709735 | orchestrator |  { 2025-11-01 12:52:14.709746 | orchestrator |  "pv_name": "/dev/sdb", 2025-11-01 12:52:14.709757 | orchestrator |  "vg_name": "ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1" 2025-11-01 12:52:14.709768 | orchestrator |  }, 2025-11-01 12:52:14.709778 | orchestrator |  { 2025-11-01 12:52:14.709789 | orchestrator |  "pv_name": "/dev/sdc", 2025-11-01 12:52:14.709800 | orchestrator |  "vg_name": 
"ceph-780930f3-bf13-5252-a15a-5f9f469ca774" 2025-11-01 12:52:14.709810 | orchestrator |  } 2025-11-01 12:52:14.709821 | orchestrator |  ] 2025-11-01 12:52:14.709831 | orchestrator |  } 2025-11-01 12:52:14.709842 | orchestrator | } 2025-11-01 12:52:14.709853 | orchestrator | 2025-11-01 12:52:14.709864 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-11-01 12:52:14.709875 | orchestrator | 2025-11-01 12:52:14.709886 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-11-01 12:52:14.709896 | orchestrator | Saturday 01 November 2025 12:52:08 +0000 (0:00:00.539) 0:00:54.552 ***** 2025-11-01 12:52:14.709907 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-11-01 12:52:14.709918 | orchestrator | 2025-11-01 12:52:14.709943 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-11-01 12:52:14.709954 | orchestrator | Saturday 01 November 2025 12:52:09 +0000 (0:00:00.270) 0:00:54.823 ***** 2025-11-01 12:52:14.709965 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:52:14.709977 | orchestrator | 2025-11-01 12:52:14.709987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.709998 | orchestrator | Saturday 01 November 2025 12:52:09 +0000 (0:00:00.240) 0:00:55.064 ***** 2025-11-01 12:52:14.710010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-11-01 12:52:14.710077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-11-01 12:52:14.710088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-11-01 12:52:14.710100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-11-01 12:52:14.710110 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-11-01 12:52:14.710121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-11-01 12:52:14.710132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-11-01 12:52:14.710143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-11-01 12:52:14.710153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-11-01 12:52:14.710164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-11-01 12:52:14.710175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-11-01 12:52:14.710217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-11-01 12:52:14.710229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-11-01 12:52:14.710240 | orchestrator | 2025-11-01 12:52:14.710251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710261 | orchestrator | Saturday 01 November 2025 12:52:09 +0000 (0:00:00.442) 0:00:55.506 ***** 2025-11-01 12:52:14.710272 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710287 | orchestrator | 2025-11-01 12:52:14.710299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710310 | orchestrator | Saturday 01 November 2025 12:52:10 +0000 (0:00:00.226) 0:00:55.733 ***** 2025-11-01 12:52:14.710320 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710331 | orchestrator | 2025-11-01 12:52:14.710342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710372 | orchestrator | 
Saturday 01 November 2025 12:52:10 +0000 (0:00:00.233) 0:00:55.966 ***** 2025-11-01 12:52:14.710384 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710394 | orchestrator | 2025-11-01 12:52:14.710405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710416 | orchestrator | Saturday 01 November 2025 12:52:10 +0000 (0:00:00.210) 0:00:56.176 ***** 2025-11-01 12:52:14.710427 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710438 | orchestrator | 2025-11-01 12:52:14.710449 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710460 | orchestrator | Saturday 01 November 2025 12:52:10 +0000 (0:00:00.201) 0:00:56.378 ***** 2025-11-01 12:52:14.710470 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710481 | orchestrator | 2025-11-01 12:52:14.710492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710503 | orchestrator | Saturday 01 November 2025 12:52:11 +0000 (0:00:00.701) 0:00:57.080 ***** 2025-11-01 12:52:14.710514 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710525 | orchestrator | 2025-11-01 12:52:14.710535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710546 | orchestrator | Saturday 01 November 2025 12:52:11 +0000 (0:00:00.192) 0:00:57.273 ***** 2025-11-01 12:52:14.710557 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710568 | orchestrator | 2025-11-01 12:52:14.710579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710589 | orchestrator | Saturday 01 November 2025 12:52:11 +0000 (0:00:00.211) 0:00:57.484 ***** 2025-11-01 12:52:14.710600 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:14.710611 | orchestrator | 2025-11-01 12:52:14.710622 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710633 | orchestrator | Saturday 01 November 2025 12:52:11 +0000 (0:00:00.223) 0:00:57.708 ***** 2025-11-01 12:52:14.710644 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a) 2025-11-01 12:52:14.710656 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a) 2025-11-01 12:52:14.710667 | orchestrator | 2025-11-01 12:52:14.710678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710688 | orchestrator | Saturday 01 November 2025 12:52:12 +0000 (0:00:00.482) 0:00:58.190 ***** 2025-11-01 12:52:14.710699 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa) 2025-11-01 12:52:14.710710 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa) 2025-11-01 12:52:14.710721 | orchestrator | 2025-11-01 12:52:14.710732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710743 | orchestrator | Saturday 01 November 2025 12:52:12 +0000 (0:00:00.437) 0:00:58.628 ***** 2025-11-01 12:52:14.710766 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8) 2025-11-01 12:52:14.710778 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8) 2025-11-01 12:52:14.710789 | orchestrator | 2025-11-01 12:52:14.710800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710811 | orchestrator | Saturday 01 November 2025 12:52:13 +0000 (0:00:00.476) 0:00:59.104 ***** 2025-11-01 12:52:14.710821 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc) 2025-11-01 12:52:14.710832 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc) 2025-11-01 12:52:14.710843 | orchestrator | 2025-11-01 12:52:14.710854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-11-01 12:52:14.710865 | orchestrator | Saturday 01 November 2025 12:52:13 +0000 (0:00:00.492) 0:00:59.597 ***** 2025-11-01 12:52:14.710876 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-11-01 12:52:14.710887 | orchestrator | 2025-11-01 12:52:14.710898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:14.710908 | orchestrator | Saturday 01 November 2025 12:52:14 +0000 (0:00:00.360) 0:00:59.957 ***** 2025-11-01 12:52:14.710919 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-11-01 12:52:14.710930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-11-01 12:52:14.710940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-11-01 12:52:14.710951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-11-01 12:52:14.710962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-11-01 12:52:14.710972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-11-01 12:52:14.710983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-11-01 12:52:14.710994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-11-01 12:52:14.711005 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-11-01 12:52:14.711016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-11-01 12:52:14.711026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-11-01 12:52:14.711044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-11-01 12:52:24.147099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-11-01 12:52:24.147242 | orchestrator | 2025-11-01 12:52:24.147260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147272 | orchestrator | Saturday 01 November 2025 12:52:14 +0000 (0:00:00.450) 0:01:00.407 ***** 2025-11-01 12:52:24.147283 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147296 | orchestrator | 2025-11-01 12:52:24.147307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147318 | orchestrator | Saturday 01 November 2025 12:52:14 +0000 (0:00:00.209) 0:01:00.617 ***** 2025-11-01 12:52:24.147328 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147339 | orchestrator | 2025-11-01 12:52:24.147350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147361 | orchestrator | Saturday 01 November 2025 12:52:15 +0000 (0:00:00.756) 0:01:01.374 ***** 2025-11-01 12:52:24.147372 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147382 | orchestrator | 2025-11-01 12:52:24.147393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147427 | orchestrator | Saturday 01 November 2025 12:52:15 +0000 (0:00:00.236) 0:01:01.611 ***** 2025-11-01 12:52:24.147439 | orchestrator | 
skipping: [testbed-node-5] 2025-11-01 12:52:24.147450 | orchestrator | 2025-11-01 12:52:24.147460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147471 | orchestrator | Saturday 01 November 2025 12:52:16 +0000 (0:00:00.197) 0:01:01.808 ***** 2025-11-01 12:52:24.147482 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147492 | orchestrator | 2025-11-01 12:52:24.147503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147514 | orchestrator | Saturday 01 November 2025 12:52:16 +0000 (0:00:00.209) 0:01:02.018 ***** 2025-11-01 12:52:24.147524 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147535 | orchestrator | 2025-11-01 12:52:24.147546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147556 | orchestrator | Saturday 01 November 2025 12:52:16 +0000 (0:00:00.214) 0:01:02.233 ***** 2025-11-01 12:52:24.147567 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147577 | orchestrator | 2025-11-01 12:52:24.147588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147599 | orchestrator | Saturday 01 November 2025 12:52:16 +0000 (0:00:00.208) 0:01:02.442 ***** 2025-11-01 12:52:24.147609 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147620 | orchestrator | 2025-11-01 12:52:24.147632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147644 | orchestrator | Saturday 01 November 2025 12:52:16 +0000 (0:00:00.202) 0:01:02.644 ***** 2025-11-01 12:52:24.147656 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-11-01 12:52:24.147669 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-11-01 12:52:24.147681 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-11-01 
12:52:24.147693 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-11-01 12:52:24.147705 | orchestrator | 2025-11-01 12:52:24.147717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147729 | orchestrator | Saturday 01 November 2025 12:52:17 +0000 (0:00:00.691) 0:01:03.335 ***** 2025-11-01 12:52:24.147741 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147753 | orchestrator | 2025-11-01 12:52:24.147765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147778 | orchestrator | Saturday 01 November 2025 12:52:17 +0000 (0:00:00.202) 0:01:03.537 ***** 2025-11-01 12:52:24.147790 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147801 | orchestrator | 2025-11-01 12:52:24.147814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147827 | orchestrator | Saturday 01 November 2025 12:52:18 +0000 (0:00:00.226) 0:01:03.764 ***** 2025-11-01 12:52:24.147838 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147850 | orchestrator | 2025-11-01 12:52:24.147862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-11-01 12:52:24.147874 | orchestrator | Saturday 01 November 2025 12:52:18 +0000 (0:00:00.207) 0:01:03.971 ***** 2025-11-01 12:52:24.147886 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147898 | orchestrator | 2025-11-01 12:52:24.147910 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-11-01 12:52:24.147922 | orchestrator | Saturday 01 November 2025 12:52:18 +0000 (0:00:00.233) 0:01:04.205 ***** 2025-11-01 12:52:24.147934 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.147946 | orchestrator | 2025-11-01 12:52:24.147958 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-11-01 12:52:24.147970 | orchestrator | Saturday 01 November 2025 12:52:18 +0000 (0:00:00.368) 0:01:04.574 ***** 2025-11-01 12:52:24.147982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fea132eb-9454-553c-8b4e-faa263198857'}}) 2025-11-01 12:52:24.147995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}}) 2025-11-01 12:52:24.148013 | orchestrator | 2025-11-01 12:52:24.148024 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-11-01 12:52:24.148035 | orchestrator | Saturday 01 November 2025 12:52:19 +0000 (0:00:00.191) 0:01:04.765 ***** 2025-11-01 12:52:24.148046 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'}) 2025-11-01 12:52:24.148057 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'}) 2025-11-01 12:52:24.148068 | orchestrator | 2025-11-01 12:52:24.148079 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-11-01 12:52:24.148106 | orchestrator | Saturday 01 November 2025 12:52:20 +0000 (0:00:01.912) 0:01:06.678 ***** 2025-11-01 12:52:24.148117 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})  2025-11-01 12:52:24.148129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})  2025-11-01 12:52:24.148140 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:52:24.148151 | orchestrator | 2025-11-01 12:52:24.148162 | orchestrator | TASK [Create 
block LVs] ********************************************************
2025-11-01 12:52:24.148172 | orchestrator | Saturday 01 November 2025  12:52:21 +0000 (0:00:00.163) 0:01:06.841 *****
2025-11-01 12:52:24.148183 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:24.148229 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:24.148242 | orchestrator |
2025-11-01 12:52:24.148253 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-11-01 12:52:24.148264 | orchestrator | Saturday 01 November 2025  12:52:22 +0000 (0:00:01.325) 0:01:08.166 *****
2025-11-01 12:52:24.148275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:24.148286 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:24.148297 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148307 | orchestrator |
2025-11-01 12:52:24.148318 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-11-01 12:52:24.148329 | orchestrator | Saturday 01 November 2025  12:52:22 +0000 (0:00:00.175) 0:01:08.342 *****
2025-11-01 12:52:24.148340 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148350 | orchestrator |
2025-11-01 12:52:24.148361 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-11-01 12:52:24.148372 | orchestrator | Saturday 01 November 2025  12:52:22 +0000 (0:00:00.159) 0:01:08.501 *****
2025-11-01 12:52:24.148383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:24.148400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:24.148411 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148422 | orchestrator |
2025-11-01 12:52:24.148433 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-11-01 12:52:24.148444 | orchestrator | Saturday 01 November 2025  12:52:22 +0000 (0:00:00.165) 0:01:08.667 *****
2025-11-01 12:52:24.148454 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148472 | orchestrator |
2025-11-01 12:52:24.148483 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-11-01 12:52:24.148494 | orchestrator | Saturday 01 November 2025  12:52:23 +0000 (0:00:00.138) 0:01:08.805 *****
2025-11-01 12:52:24.148505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:24.148516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:24.148526 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148537 | orchestrator |
2025-11-01 12:52:24.148548 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-11-01 12:52:24.148559 | orchestrator | Saturday 01 November 2025  12:52:23 +0000 (0:00:00.172) 0:01:08.977 *****
2025-11-01 12:52:24.148569 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148580 | orchestrator |
2025-11-01 12:52:24.148591 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-11-01 12:52:24.148601 | orchestrator | Saturday 01 November 2025  12:52:23 +0000 (0:00:00.158) 0:01:09.136 *****
2025-11-01 12:52:24.148612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:24.148623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:24.148634 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:24.148645 | orchestrator |
2025-11-01 12:52:24.148655 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-11-01 12:52:24.148666 | orchestrator | Saturday 01 November 2025  12:52:23 +0000 (0:00:00.155) 0:01:09.291 *****
2025-11-01 12:52:24.148677 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:24.148688 | orchestrator |
2025-11-01 12:52:24.148698 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-11-01 12:52:24.148709 | orchestrator | Saturday 01 November 2025  12:52:23 +0000 (0:00:00.374) 0:01:09.666 *****
2025-11-01 12:52:24.148727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:30.467091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:30.467239 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467257 | orchestrator |
2025-11-01 12:52:30.467269 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-11-01 12:52:30.467282 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.180) 0:01:09.847 *****
2025-11-01 12:52:30.467293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:30.467305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:30.467316 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467328 | orchestrator |
2025-11-01 12:52:30.467339 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-11-01 12:52:30.467350 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.167) 0:01:10.014 *****
2025-11-01 12:52:30.467361 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:30.467372 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:30.467382 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467417 | orchestrator |
2025-11-01 12:52:30.467429 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-11-01 12:52:30.467440 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.175) 0:01:10.190 *****
2025-11-01 12:52:30.467451 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467462 | orchestrator |
2025-11-01 12:52:30.467472 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-11-01 12:52:30.467483 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.150) 0:01:10.340 *****
2025-11-01 12:52:30.467494 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467505 | orchestrator |
2025-11-01 12:52:30.467516 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-11-01 12:52:30.467526 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.144) 0:01:10.485 *****
2025-11-01 12:52:30.467537 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.467548 | orchestrator |
2025-11-01 12:52:30.467559 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-11-01 12:52:30.467584 | orchestrator | Saturday 01 November 2025  12:52:24 +0000 (0:00:00.135) 0:01:10.621 *****
2025-11-01 12:52:30.467595 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 12:52:30.467607 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-11-01 12:52:30.467618 | orchestrator | }
2025-11-01 12:52:30.467629 | orchestrator |
2025-11-01 12:52:30.467642 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-11-01 12:52:30.467654 | orchestrator | Saturday 01 November 2025  12:52:25 +0000 (0:00:00.155) 0:01:10.777 *****
2025-11-01 12:52:30.467666 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 12:52:30.467679 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-11-01 12:52:30.467690 | orchestrator | }
2025-11-01 12:52:30.467702 | orchestrator |
2025-11-01 12:52:30.467714 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-11-01 12:52:30.467728 | orchestrator | Saturday 01 November 2025  12:52:25 +0000 (0:00:00.139) 0:01:10.916 *****
2025-11-01 12:52:30.467740 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 12:52:30.467752 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-11-01 12:52:30.467765 | orchestrator | }
2025-11-01 12:52:30.467777 | orchestrator |
2025-11-01 12:52:30.467789 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-11-01 12:52:30.467801 | orchestrator | Saturday 01 November 2025  12:52:25 +0000 (0:00:00.146) 0:01:11.062 *****
2025-11-01 12:52:30.467813 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:30.467826 | orchestrator |
2025-11-01 12:52:30.467837 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-11-01 12:52:30.467850 | orchestrator | Saturday 01 November 2025  12:52:25 +0000 (0:00:00.509) 0:01:11.572 *****
2025-11-01 12:52:30.467861 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:30.467874 | orchestrator |
2025-11-01 12:52:30.467886 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-11-01 12:52:30.467898 | orchestrator | Saturday 01 November 2025  12:52:26 +0000 (0:00:00.530) 0:01:12.102 *****
2025-11-01 12:52:30.467910 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:30.467923 | orchestrator |
2025-11-01 12:52:30.467935 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-11-01 12:52:30.467947 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.738) 0:01:12.840 *****
2025-11-01 12:52:30.467960 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:30.467972 | orchestrator |
2025-11-01 12:52:30.467984 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-11-01 12:52:30.467995 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.150) 0:01:12.990 *****
2025-11-01 12:52:30.468006 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468017 | orchestrator |
2025-11-01 12:52:30.468027 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-11-01 12:52:30.468038 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.128) 0:01:13.119 *****
2025-11-01 12:52:30.468056 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468067 | orchestrator |
2025-11-01 12:52:30.468078 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-11-01 12:52:30.468088 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.118) 0:01:13.237 *****
2025-11-01 12:52:30.468099 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 12:52:30.468110 | orchestrator |  "vgs_report": {
2025-11-01 12:52:30.468121 | orchestrator |  "vg": []
2025-11-01 12:52:30.468148 | orchestrator |  }
2025-11-01 12:52:30.468160 | orchestrator | }
2025-11-01 12:52:30.468171 | orchestrator |
2025-11-01 12:52:30.468182 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-11-01 12:52:30.468212 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.156) 0:01:13.393 *****
2025-11-01 12:52:30.468223 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468234 | orchestrator |
2025-11-01 12:52:30.468245 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-11-01 12:52:30.468256 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.159) 0:01:13.553 *****
2025-11-01 12:52:30.468267 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468277 | orchestrator |
2025-11-01 12:52:30.468288 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-11-01 12:52:30.468299 | orchestrator | Saturday 01 November 2025  12:52:27 +0000 (0:00:00.153) 0:01:13.707 *****
2025-11-01 12:52:30.468310 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468321 | orchestrator |
2025-11-01 12:52:30.468332 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-11-01 12:52:30.468343 | orchestrator | Saturday 01 November 2025  12:52:28 +0000 (0:00:00.157) 0:01:13.865 *****
2025-11-01 12:52:30.468354 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468365 | orchestrator |
2025-11-01 12:52:30.468376 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-11-01 12:52:30.468386 | orchestrator | Saturday 01 November 2025  12:52:28 +0000 (0:00:00.145) 0:01:14.010 *****
2025-11-01 12:52:30.468397 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468408 | orchestrator |
2025-11-01 12:52:30.468419 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-11-01 12:52:30.468430 | orchestrator | Saturday 01 November 2025  12:52:28 +0000 (0:00:00.149) 0:01:14.160 *****
2025-11-01 12:52:30.468441 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468452 | orchestrator |
2025-11-01 12:52:30.468462 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-11-01 12:52:30.468473 | orchestrator | Saturday 01 November 2025  12:52:28 +0000 (0:00:00.128) 0:01:14.288 *****
2025-11-01 12:52:30.468484 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468495 | orchestrator |
2025-11-01 12:52:30.468506 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-11-01 12:52:30.468516 | orchestrator | Saturday 01 November 2025  12:52:28 +0000 (0:00:00.144) 0:01:14.432 *****
2025-11-01 12:52:30.468527 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468538 | orchestrator |
2025-11-01 12:52:30.468549 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-11-01 12:52:30.468560 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.393) 0:01:14.826 *****
2025-11-01 12:52:30.468571 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468581 | orchestrator |
2025-11-01 12:52:30.468592 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-11-01 12:52:30.468609 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.135) 0:01:14.961 *****
2025-11-01 12:52:30.468620 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468631 | orchestrator |
2025-11-01 12:52:30.468642 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-11-01 12:52:30.468653 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.146) 0:01:15.107 *****
2025-11-01 12:52:30.468664 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468682 | orchestrator |
2025-11-01 12:52:30.468693 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-11-01 12:52:30.468704 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.141) 0:01:15.249 *****
2025-11-01 12:52:30.468715 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468726 | orchestrator |
2025-11-01 12:52:30.468736 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-11-01 12:52:30.468747 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.141) 0:01:15.391 *****
2025-11-01 12:52:30.468758 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468769 | orchestrator |
2025-11-01 12:52:30.468780 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-11-01 12:52:30.468791 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.152) 0:01:15.544 *****
2025-11-01 12:52:30.468802 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468813 | orchestrator |
2025-11-01 12:52:30.468824 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-11-01 12:52:30.468834 | orchestrator | Saturday 01 November 2025  12:52:29 +0000 (0:00:00.127) 0:01:15.672 *****
2025-11-01 12:52:30.468845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:30.468857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:30.468868 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468878 | orchestrator |
2025-11-01 12:52:30.468889 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-11-01 12:52:30.468900 | orchestrator | Saturday 01 November 2025  12:52:30 +0000 (0:00:00.167) 0:01:15.840 *****
2025-11-01 12:52:30.468911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:30.468922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:30.468933 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:30.468944 | orchestrator |
2025-11-01 12:52:30.468955 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-11-01 12:52:30.468966 | orchestrator | Saturday 01 November 2025  12:52:30 +0000 (0:00:00.154) 0:01:15.994 *****
2025-11-01 12:52:30.468983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647178 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647242 | orchestrator |
2025-11-01 12:52:33.647256 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-11-01 12:52:33.647269 | orchestrator | Saturday 01 November 2025  12:52:30 +0000 (0:00:00.174) 0:01:16.169 *****
2025-11-01 12:52:33.647280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647303 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647314 | orchestrator |
2025-11-01 12:52:33.647325 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-11-01 12:52:33.647336 | orchestrator | Saturday 01 November 2025  12:52:30 +0000 (0:00:00.158) 0:01:16.328 *****
2025-11-01 12:52:33.647347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647393 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647404 | orchestrator |
2025-11-01 12:52:33.647415 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-11-01 12:52:33.647425 | orchestrator | Saturday 01 November 2025  12:52:30 +0000 (0:00:00.185) 0:01:16.514 *****
2025-11-01 12:52:33.647436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647458 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647469 | orchestrator |
2025-11-01 12:52:33.647480 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-11-01 12:52:33.647491 | orchestrator | Saturday 01 November 2025  12:52:31 +0000 (0:00:00.417) 0:01:16.931 *****
2025-11-01 12:52:33.647502 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647523 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647534 | orchestrator |
2025-11-01 12:52:33.647545 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-11-01 12:52:33.647556 | orchestrator | Saturday 01 November 2025  12:52:31 +0000 (0:00:00.181) 0:01:17.113 *****
2025-11-01 12:52:33.647567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647589 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647600 | orchestrator |
2025-11-01 12:52:33.647611 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-11-01 12:52:33.647623 | orchestrator | Saturday 01 November 2025  12:52:31 +0000 (0:00:00.166) 0:01:17.279 *****
2025-11-01 12:52:33.647636 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:33.647649 | orchestrator |
2025-11-01 12:52:33.647661 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-11-01 12:52:33.647673 | orchestrator | Saturday 01 November 2025  12:52:32 +0000 (0:00:00.510) 0:01:17.790 *****
2025-11-01 12:52:33.647685 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:33.647696 | orchestrator |
2025-11-01 12:52:33.647708 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-11-01 12:52:33.647721 | orchestrator | Saturday 01 November 2025  12:52:32 +0000 (0:00:00.518) 0:01:18.309 *****
2025-11-01 12:52:33.647733 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:52:33.647746 | orchestrator |
2025-11-01 12:52:33.647758 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-11-01 12:52:33.647770 | orchestrator | Saturday 01 November 2025  12:52:32 +0000 (0:00:00.153) 0:01:18.462 *****
2025-11-01 12:52:33.647783 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'vg_name': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647796 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'vg_name': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647808 | orchestrator |
2025-11-01 12:52:33.647820 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-11-01 12:52:33.647841 | orchestrator | Saturday 01 November 2025  12:52:32 +0000 (0:00:00.184) 0:01:18.647 *****
2025-11-01 12:52:33.647870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647896 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647909 | orchestrator |
2025-11-01 12:52:33.647921 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-11-01 12:52:33.647933 | orchestrator | Saturday 01 November 2025  12:52:33 +0000 (0:00:00.178) 0:01:18.825 *****
2025-11-01 12:52:33.647945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.647958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.647970 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.647982 | orchestrator |
2025-11-01 12:52:33.647993 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-11-01 12:52:33.648003 | orchestrator | Saturday 01 November 2025  12:52:33 +0000 (0:00:00.164) 0:01:18.990 *****
2025-11-01 12:52:33.648014 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 12:52:33.648042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 12:52:33.648053 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:52:33.648064 | orchestrator |
2025-11-01 12:52:33.648075 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-11-01 12:52:33.648086 | orchestrator | Saturday 01 November 2025  12:52:33 +0000 (0:00:00.174) 0:01:19.165 *****
2025-11-01 12:52:33.648097 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 12:52:33.648108 | orchestrator |  "lvm_report": {
2025-11-01 12:52:33.648119 | orchestrator |  "lv": [
2025-11-01 12:52:33.648129 | orchestrator |  {
2025-11-01 12:52:33.648140 | orchestrator |  "lv_name": "osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73",
2025-11-01 12:52:33.648156 | orchestrator |  "vg_name": "ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73"
2025-11-01 12:52:33.648167 | orchestrator |  },
2025-11-01 12:52:33.648178 | orchestrator |  {
2025-11-01 12:52:33.648189 | orchestrator |  "lv_name": "osd-block-fea132eb-9454-553c-8b4e-faa263198857",
2025-11-01 12:52:33.648217 | orchestrator |  "vg_name": "ceph-fea132eb-9454-553c-8b4e-faa263198857"
2025-11-01 12:52:33.648228 | orchestrator |  }
2025-11-01 12:52:33.648238 | orchestrator |  ],
2025-11-01 12:52:33.648249 | orchestrator |  "pv": [
2025-11-01 12:52:33.648260 | orchestrator |  {
2025-11-01 12:52:33.648271 | orchestrator |  "pv_name": "/dev/sdb",
2025-11-01 12:52:33.648282 | orchestrator |  "vg_name": "ceph-fea132eb-9454-553c-8b4e-faa263198857"
2025-11-01 12:52:33.648293 | orchestrator |  },
2025-11-01 12:52:33.648303 | orchestrator |  {
2025-11-01 12:52:33.648314 | orchestrator |  "pv_name": "/dev/sdc",
2025-11-01 12:52:33.648325 | orchestrator |  "vg_name": "ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73"
2025-11-01 12:52:33.648336 | orchestrator |  }
2025-11-01 12:52:33.648347 | orchestrator |  ]
2025-11-01 12:52:33.648358 | orchestrator |  }
2025-11-01 12:52:33.648369 | orchestrator | }
2025-11-01 12:52:33.648380 | orchestrator |
2025-11-01 12:52:33.648391 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:52:33.648410 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-01 12:52:33.648421 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-01 12:52:33.648432 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-11-01 12:52:33.648443 | orchestrator |
2025-11-01 12:52:33.648454 | orchestrator |
2025-11-01 12:52:33.648464 | orchestrator |
2025-11-01 12:52:33.648475 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:52:33.648486 | orchestrator | Saturday 01 November 2025  12:52:33 +0000 (0:00:00.160) 0:01:19.325 *****
2025-11-01 12:52:33.648497 | orchestrator | ===============================================================================
2025-11-01 12:52:33.648508 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s
2025-11-01 12:52:33.648519 | orchestrator | Create block LVs -------------------------------------------------------- 4.23s
2025-11-01 12:52:33.648529 | orchestrator | Add known partitions to the list of available block devices ------------- 2.06s
2025-11-01 12:52:33.648540 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.86s
2025-11-01 12:52:33.648551 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s
2025-11-01 12:52:33.648562 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.72s
2025-11-01 12:52:33.648572 | orchestrator | Add known links to the list of available block devices ------------------ 1.56s
2025-11-01 12:52:33.648583 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s
2025-11-01 12:52:33.648600 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s
2025-11-01 12:52:34.156697 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s
2025-11-01 12:52:34.156794 | orchestrator | Print LVM report data --------------------------------------------------- 1.02s
2025-11-01 12:52:34.156808 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2025-11-01 12:52:34.156818 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s
2025-11-01 12:52:34.156828 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s
2025-11-01 12:52:34.156837 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.82s
2025-11-01 12:52:34.156847 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s
2025-11-01 12:52:34.156856 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.76s
2025-11-01 12:52:34.156866 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.76s
2025-11-01 12:52:34.156875 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s
2025-11-01 12:52:34.156885 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-11-01 12:52:46.867678 | orchestrator | 2025-11-01 12:52:46 | INFO  | Task a51b50f6-e6b3-4d4e-9a4b-899893c55aea (facts) was prepared for execution.
2025-11-01 12:52:46.867776 | orchestrator | 2025-11-01 12:52:46 | INFO  | It takes a moment until task a51b50f6-e6b3-4d4e-9a4b-899893c55aea (facts) has been started and output is visible here.
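The Ceph configure play above gathers DB, WAL, and DB+WAL VGs "with total and available size in bytes", combines their JSON output, and prints a `vgs_report` (empty here, since no extra DB/WAL devices are defined on the testbed nodes). As a hedged sketch, assuming those tasks wrap LVM's standard JSON reporting (something like `vgs --units b --reportformat json`), such a report can be parsed like this; the sample report below is hypothetical:

```python
import json

# Hypothetical output of a command such as
# `vgs --units b --reportformat json -o vg_name,vg_size,vg_free`;
# on the testbed nodes above the equivalent report was empty ("vg": []).
SAMPLE_REPORT = """
{
  "report": [
    {
      "vg": [
        {"vg_name": "ceph-db", "vg_size": "10737418240B", "vg_free": "5368709120B"}
      ]
    }
  ]
}
"""


def vg_free_bytes(report_json):
    """Return a mapping of VG name -> free bytes, stripping the 'B' unit suffix."""
    data = json.loads(report_json)
    free = {}
    for section in data["report"]:
        for vg in section.get("vg", []):
            free[vg["vg_name"]] = int(vg["vg_free"].rstrip("B"))
    return free


print(vg_free_bytes(SAMPLE_REPORT))  # {'ceph-db': 5368709120}
```

With a report like this in hand, the play's subsequent "Fail if size of ... LVs > available" checks reduce to comparing a requested LV size against the per-VG free bytes.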
2025-11-01 12:53:01.373508 | orchestrator |
2025-11-01 12:53:01.373596 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-11-01 12:53:01.373612 | orchestrator |
2025-11-01 12:53:01.373624 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-11-01 12:53:01.373636 | orchestrator | Saturday 01 November 2025  12:52:51 +0000 (0:00:00.294) 0:00:00.294 *****
2025-11-01 12:53:01.373647 | orchestrator | ok: [testbed-manager]
2025-11-01 12:53:01.373659 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:53:01.373697 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:53:01.373708 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:53:01.373719 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:53:01.373729 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:53:01.373740 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:53:01.373750 | orchestrator |
2025-11-01 12:53:01.373761 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-11-01 12:53:01.373772 | orchestrator | Saturday 01 November 2025  12:52:52 +0000 (0:00:01.240) 0:00:01.534 *****
2025-11-01 12:53:01.373797 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:53:01.373809 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:53:01.373820 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:53:01.373831 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:53:01.373842 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:53:01.373852 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:53:01.373863 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:53:01.373873 | orchestrator |
2025-11-01 12:53:01.373884 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-11-01 12:53:01.373895 | orchestrator |
2025-11-01 12:53:01.373906 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-11-01 12:53:01.373916 | orchestrator | Saturday 01 November 2025  12:52:54 +0000 (0:00:01.371) 0:00:02.905 *****
2025-11-01 12:53:01.373927 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:53:01.373938 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:53:01.373948 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:53:01.373959 | orchestrator | ok: [testbed-manager]
2025-11-01 12:53:01.373969 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:53:01.373980 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:53:01.373991 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:53:01.374001 | orchestrator |
2025-11-01 12:53:01.374012 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-11-01 12:53:01.374081 | orchestrator |
2025-11-01 12:53:01.374094 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-11-01 12:53:01.374107 | orchestrator | Saturday 01 November 2025  12:53:00 +0000 (0:00:05.915) 0:00:08.821 *****
2025-11-01 12:53:01.374120 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:53:01.374133 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:53:01.374145 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:53:01.374157 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:53:01.374169 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:53:01.374182 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:53:01.374217 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:53:01.374230 | orchestrator |
2025-11-01 12:53:01.374242 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:53:01.374255 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374269 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374283 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374296 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374308 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374321 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374333 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 12:53:01.374355 | orchestrator |
2025-11-01 12:53:01.374368 | orchestrator |
2025-11-01 12:53:01.374381 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:53:01.374393 | orchestrator | Saturday 01 November 2025  12:53:00 +0000 (0:00:00.612) 0:00:09.433 *****
2025-11-01 12:53:01.374406 | orchestrator | ===============================================================================
2025-11-01 12:53:01.374418 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.92s
2025-11-01 12:53:01.374429 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2025-11-01 12:53:01.374440 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s
2025-11-01 12:53:01.374451 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2025-11-01 12:53:14.235596 | orchestrator | 2025-11-01 12:53:14 | INFO  | Task 445aeb73-b8ab-4b7d-92e8-adce1bd0c3d4 (frr) was prepared for execution.
2025-11-01 12:53:14.236305 | orchestrator | 2025-11-01 12:53:14 | INFO  | It takes a moment until task 445aeb73-b8ab-4b7d-92e8-adce1bd0c3d4 (frr) has been started and output is visible here.
2025-11-01 12:53:43.624095 | orchestrator |
2025-11-01 12:53:43.624188 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-11-01 12:53:43.624245 | orchestrator |
2025-11-01 12:53:43.624259 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-11-01 12:53:43.624272 | orchestrator | Saturday 01 November 2025 12:53:18 +0000 (0:00:00.252) 0:00:00.252 *****
2025-11-01 12:53:43.624283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-11-01 12:53:43.624296 | orchestrator |
2025-11-01 12:53:43.624307 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-11-01 12:53:43.624318 | orchestrator | Saturday 01 November 2025 12:53:18 +0000 (0:00:00.241) 0:00:00.494 *****
2025-11-01 12:53:43.624329 | orchestrator | changed: [testbed-manager]
2025-11-01 12:53:43.624341 | orchestrator |
2025-11-01 12:53:43.624352 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-11-01 12:53:43.624363 | orchestrator | Saturday 01 November 2025 12:53:19 +0000 (0:00:01.128) 0:00:01.622 *****
2025-11-01 12:53:43.624374 | orchestrator | changed: [testbed-manager]
2025-11-01 12:53:43.624384 | orchestrator |
2025-11-01 12:53:43.624410 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-11-01 12:53:43.624422 | orchestrator | Saturday 01 November 2025 12:53:30 +0000 (0:00:10.290) 0:00:11.913 *****
2025-11-01 12:53:43.624432 | orchestrator | ok: [testbed-manager]
2025-11-01 12:53:43.624444 | orchestrator |
2025-11-01 12:53:43.624455 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-11-01 12:53:43.624466 | orchestrator | Saturday 01 November 2025 12:53:31 +0000 (0:00:01.099) 0:00:13.012 *****
2025-11-01 12:53:43.624477 | orchestrator | changed: [testbed-manager]
2025-11-01 12:53:43.624488 | orchestrator |
2025-11-01 12:53:43.624499 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-11-01 12:53:43.624509 | orchestrator | Saturday 01 November 2025 12:53:32 +0000 (0:00:00.963) 0:00:13.976 *****
2025-11-01 12:53:43.624520 | orchestrator | ok: [testbed-manager]
2025-11-01 12:53:43.624531 | orchestrator |
2025-11-01 12:53:43.624542 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-11-01 12:53:43.624553 | orchestrator | Saturday 01 November 2025 12:53:33 +0000 (0:00:01.416) 0:00:15.392 *****
2025-11-01 12:53:43.624564 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 12:53:43.624575 | orchestrator |
2025-11-01 12:53:43.624586 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-11-01 12:53:43.624596 | orchestrator | Saturday 01 November 2025 12:53:34 +0000 (0:00:00.935) 0:00:16.328 *****
2025-11-01 12:53:43.624607 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:53:43.624618 | orchestrator |
2025-11-01 12:53:43.624629 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-11-01 12:53:43.624662 | orchestrator | Saturday 01 November 2025 12:53:34 +0000 (0:00:00.165) 0:00:16.494 *****
2025-11-01 12:53:43.624675 | orchestrator | changed: [testbed-manager]
2025-11-01 12:53:43.624688 | orchestrator |
2025-11-01 12:53:43.624700 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-11-01 12:53:43.624712 | orchestrator | Saturday 01 November 2025 12:53:35 +0000 (0:00:00.979) 0:00:17.473 *****
2025-11-01 12:53:43.624724 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-11-01 12:53:43.624736 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-11-01 12:53:43.624750 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-11-01 12:53:43.624762 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-11-01 12:53:43.624774 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-11-01 12:53:43.624786 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-11-01 12:53:43.624797 | orchestrator |
2025-11-01 12:53:43.624810 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-11-01 12:53:43.624822 | orchestrator | Saturday 01 November 2025 12:53:39 +0000 (0:00:04.266) 0:00:21.740 *****
2025-11-01 12:53:43.624833 | orchestrator | ok: [testbed-manager]
2025-11-01 12:53:43.624846 | orchestrator |
2025-11-01 12:53:43.624858 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-11-01 12:53:43.624869 | orchestrator | Saturday 01 November 2025 12:53:41 +0000 (0:00:01.871) 0:00:23.611 *****
2025-11-01 12:53:43.624881 | orchestrator | changed: [testbed-manager]
2025-11-01 12:53:43.624893 | orchestrator |
2025-11-01 12:53:43.624905 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:53:43.624917 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-11-01 12:53:43.624928 | orchestrator |
2025-11-01 12:53:43.624940 | orchestrator |
2025-11-01 12:53:43.624953 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:53:43.624965 | orchestrator | Saturday 01 November 2025 12:53:43 +0000 (0:00:01.413) 0:00:25.025 *****
2025-11-01 12:53:43.624977 | orchestrator | ===============================================================================
2025-11-01 12:53:43.624989 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.29s
2025-11-01 12:53:43.625001 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.27s
2025-11-01 12:53:43.625011 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.87s
2025-11-01 12:53:43.625022 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.42s
2025-11-01 12:53:43.625049 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s
2025-11-01 12:53:43.625061 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.13s
2025-11-01 12:53:43.625071 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s
2025-11-01 12:53:43.625082 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.98s
2025-11-01 12:53:43.625093 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s
2025-11-01 12:53:43.625103 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.94s
2025-11-01 12:53:43.625114 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.24s
2025-11-01 12:53:43.625125 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-11-01 12:53:44.020164 | orchestrator |
2025-11-01 12:53:44.023954 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Nov 1 12:53:44 UTC 2025
2025-11-01 12:53:44.024013 | orchestrator |
2025-11-01 12:53:46.320356 | orchestrator | 2025-11-01 12:53:46 | INFO  | Collection nutshell is prepared for execution
2025-11-01 12:53:46.320443 | orchestrator | 2025-11-01
12:53:46 | INFO  | D [0] - dotfiles
2025-11-01 12:53:56.458417 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [0] - homer
2025-11-01 12:53:56.458477 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [0] - netdata
2025-11-01 12:53:56.458830 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [0] - openstackclient
2025-11-01 12:53:56.459185 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [0] - phpmyadmin
2025-11-01 12:53:56.459618 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [0] - common
2025-11-01 12:53:56.465063 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [1] -- loadbalancer
2025-11-01 12:53:56.465555 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [2] --- opensearch
2025-11-01 12:53:56.466057 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [2] --- mariadb-ng
2025-11-01 12:53:56.466344 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [3] ---- horizon
2025-11-01 12:53:56.466368 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [3] ---- keystone
2025-11-01 12:53:56.467748 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [4] ----- neutron
2025-11-01 12:53:56.467769 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [5] ------ wait-for-nova
2025-11-01 12:53:56.467781 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [6] ------- octavia
2025-11-01 12:53:56.469335 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- barbican
2025-11-01 12:53:56.469357 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- designate
2025-11-01 12:53:56.469368 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- ironic
2025-11-01 12:53:56.469644 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- placement
2025-11-01 12:53:56.469770 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- magnum
2025-11-01 12:53:56.470492 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [1] -- openvswitch
2025-11-01 12:53:56.470922 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [2] --- ovn
2025-11-01 12:53:56.470942 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [1] -- memcached
2025-11-01 12:53:56.470953 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [1] -- redis
2025-11-01 12:53:56.471114 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [1] -- rabbitmq-ng
2025-11-01 12:53:56.471500 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [0] - kubernetes
2025-11-01 12:53:56.474809 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [1] -- kubeconfig
2025-11-01 12:53:56.474831 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [1] -- copy-kubeconfig
2025-11-01 12:53:56.474843 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [0] - ceph
2025-11-01 12:53:56.477816 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [1] -- ceph-pools
2025-11-01 12:53:56.477836 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [2] --- copy-ceph-keys
2025-11-01 12:53:56.478090 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [3] ---- cephclient
2025-11-01 12:53:56.478406 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-11-01 12:53:56.478426 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [4] ----- wait-for-keystone
2025-11-01 12:53:56.478437 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [5] ------ kolla-ceph-rgw
2025-11-01 12:53:56.478756 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [5] ------ glance
2025-11-01 12:53:56.479122 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [5] ------ cinder
2025-11-01 12:53:56.479141 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [5] ------ nova
2025-11-01 12:53:56.479713 | orchestrator | 2025-11-01 12:53:56 | INFO  | A [4] ----- prometheus
2025-11-01 12:53:56.479734 | orchestrator | 2025-11-01 12:53:56 | INFO  | D [5] ------ grafana
2025-11-01 12:53:56.720093 | orchestrator | 2025-11-01 12:53:56 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-11-01 12:53:56.721151 | orchestrator | 2025-11-01 12:53:56 | INFO  | Tasks are running in the background
2025-11-01 12:54:00.299080 | orchestrator | 2025-11-01 12:54:00 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-11-01 12:54:02.481879 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:54:02.482991 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:54:02.485302 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 9405e274-8fc0-4101-aad0-c81f5702d362 is in state STARTED
2025-11-01 12:54:02.486258 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state STARTED
2025-11-01 12:54:02.489545 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:54:02.491124 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:54:02.495625 | orchestrator | 2025-11-01 12:54:02 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:54:02.495658 | orchestrator | 2025-11-01 12:54:02 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:54:33.692359 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:54:33.704611 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:54:33.707409 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 9405e274-8fc0-4101-aad0-c81f5702d362 is in state STARTED
2025-11-01 12:54:33.715132 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state STARTED
2025-11-01 12:54:33.723755 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:54:33.736847 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:54:33.745913 | orchestrator | 2025-11-01 12:54:33 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:54:33.745941 | orchestrator | 2025-11-01 12:54:33 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:54:36.843004 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:54:36.844035 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task
9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:54:36.844989 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task 9405e274-8fc0-4101-aad0-c81f5702d362 is in state STARTED
2025-11-01 12:54:36.846372 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state STARTED
2025-11-01 12:54:36.850608 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:54:36.853432 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:54:36.858096 | orchestrator | 2025-11-01 12:54:36 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:54:36.858123 | orchestrator | 2025-11-01 12:54:36 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:54:39.981482 | orchestrator |
2025-11-01 12:54:39.981580 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-11-01 12:54:39.981597 | orchestrator |
2025-11-01 12:54:39.981609 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-11-01 12:54:39.981621 | orchestrator | Saturday 01 November 2025 12:54:17 +0000 (0:00:01.161) 0:00:01.161 *****
2025-11-01 12:54:39.981632 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:54:39.981644 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:54:39.981654 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:54:39.981665 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:54:39.981676 | orchestrator | changed: [testbed-manager]
2025-11-01 12:54:39.981686 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:54:39.981697 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:54:39.981708 | orchestrator |
2025-11-01 12:54:39.981719 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-11-01 12:54:39.981729 | orchestrator | Saturday 01 November 2025 12:54:23 +0000 (0:00:05.764) 0:00:06.925 *****
2025-11-01 12:54:39.981740 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-11-01 12:54:39.981752 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-11-01 12:54:39.981762 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-11-01 12:54:39.981773 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-11-01 12:54:39.981784 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-11-01 12:54:39.981794 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-11-01 12:54:39.981805 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-11-01 12:54:39.981845 | orchestrator |
2025-11-01 12:54:39.981857 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-11-01 12:54:39.981868 | orchestrator | Saturday 01 November 2025 12:54:25 +0000 (0:00:02.413) 0:00:09.338 *****
2025-11-01 12:54:39.981896 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:24.867061', 'end': '2025-11-01 12:54:24.983213', 'delta': '0:00:00.116152', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.981913 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:24.514663', 'end': '2025-11-01 12:54:24.524732', 'delta': '0:00:00.010069', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.981925 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:24.531568', 'end': '2025-11-01 12:54:24.540771', 'delta': '0:00:00.009203', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.981965 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:24.556060', 'end': '2025-11-01 12:54:24.563214', 'delta': '0:00:00.007154', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.981978 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:24.630008', 'end': '2025-11-01 12:54:24.641767', 'delta': '0:00:00.011759', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.982005 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:25.520603', 'end': '2025-11-01 12:54:25.526138', 'delta': '0:00:00.005535', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.982067 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-11-01 12:54:25.559478', 'end': '2025-11-01 12:54:25.571048', 'delta': '0:00:00.011570', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-11-01 12:54:39.982082 | orchestrator |
2025-11-01 12:54:39.982095 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-11-01 12:54:39.982108 | orchestrator | Saturday 01 November 2025 12:54:29 +0000 (0:00:03.947) 0:00:13.286 *****
2025-11-01 12:54:39.982122 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-11-01 12:54:39.982135 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-11-01 12:54:39.982148 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-11-01 12:54:39.982160 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-11-01 12:54:39.982173 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-11-01 12:54:39.982186 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-11-01 12:54:39.982198 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-11-01 12:54:39.982234 | orchestrator |
2025-11-01 12:54:39.982247 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-11-01 12:54:39.982260 | orchestrator | Saturday 01 November 2025 12:54:33 +0000 (0:00:03.433) 0:00:16.720 *****
2025-11-01 12:54:39.982271 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-11-01 12:54:39.982282 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-11-01 12:54:39.982292 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-11-01 12:54:39.982303 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-11-01 12:54:39.982314 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-11-01 12:54:39.982324 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-11-01 12:54:39.982335 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-11-01 12:54:39.982346 | orchestrator |
2025-11-01 12:54:39.982357 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:54:39.982377 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:54:39.982398 | orchestrator |
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982410 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982420 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982431 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982442 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982453 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:54:39.982464 | orchestrator | 2025-11-01 12:54:39.982475 | orchestrator | 2025-11-01 12:54:39.982485 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:54:39.982496 | orchestrator | Saturday 01 November 2025 12:54:37 +0000 (0:00:04.489) 0:00:21.209 ***** 2025-11-01 12:54:39.982507 | orchestrator | =============================================================================== 2025-11-01 12:54:39.982518 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.76s 2025-11-01 12:54:39.982529 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.49s 2025-11-01 12:54:39.982539 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.95s 2025-11-01 12:54:39.982550 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 3.43s 2025-11-01 12:54:39.982562 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.41s
2025-11-01 12:54:39.982572 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:54:39.982584 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:54:39.982595 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 9405e274-8fc0-4101-aad0-c81f5702d362 is in state SUCCESS
2025-11-01 12:54:40.016249 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state STARTED
2025-11-01 12:54:40.016311 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:54:40.016325 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:54:40.016338 | orchestrator | 2025-11-01 12:54:39 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:54:40.016351 | orchestrator | 2025-11-01 12:54:39 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:54:43.301305 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:54:43.301396 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:54:43.301409 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED
2025-11-01 12:54:43.317804 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state STARTED
2025-11-01 12:54:43.317835 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:54:43.317846 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:54:43.318685 | orchestrator | 2025-11-01 12:54:43 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:54:43.318707 | orchestrator | 2025-11-01 12:54:43 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:55:05.268350 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:55:05.268419 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state STARTED
2025-11-01 12:55:05.268431 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED
2025-11-01 12:55:05.268443 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 597e147a-623e-4fc4-b601-cc9a1c069133 is in state SUCCESS
2025-11-01 12:55:05.268454 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:55:05.268465 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:55:05.268498 | orchestrator | 2025-11-01 12:55:05 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:55:05.268510 | orchestrator | 2025-11-01 12:55:05 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:55:24.016438 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:55:24.016524 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task 9c4d72f2-e085-4a53-a17e-b6a46607f960 is in state SUCCESS
2025-11-01 12:55:24.018063 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED
2025-11-01 12:55:24.018775 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:55:24.019168 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:55:24.019947 | orchestrator | 2025-11-01 12:55:24 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state STARTED
2025-11-01 12:55:24.019962 | orchestrator | 2025-11-01 12:55:24 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:55:55.081173 | orchestrator | 2025-11-01 12:55:55 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:55:55.082865 | orchestrator | 2025-11-01 12:55:55 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED
2025-11-01 12:55:55.085277 | orchestrator | 2025-11-01 12:55:55 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:55:55.086278 | orchestrator | 2025-11-01 12:55:55 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:55:55.087138 | orchestrator | 2025-11-01 12:55:55 | INFO  | Task 0a04edaf-e6c2-4068-bce0-e474a12d35b4 is in state SUCCESS
2025-11-01 12:55:55.087160 | orchestrator | 2025-11-01 12:55:55 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:55:55.089786 | orchestrator |
2025-11-01 12:55:55.089825 | orchestrator |
2025-11-01 12:55:55.089837 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-11-01 12:55:55.089849 | orchestrator |
2025-11-01 12:55:55.089860 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-11-01 12:55:55.089871 | orchestrator | Saturday 01 November 2025 12:54:16 +0000 (0:00:00.713) 0:00:00.713 *****
2025-11-01 12:55:55.089882 | orchestrator | ok: [testbed-manager] => {
2025-11-01 12:55:55.089895 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-11-01 12:55:55.089907 | orchestrator | } 2025-11-01 12:55:55.089919 | orchestrator | 2025-11-01 12:55:55.089930 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-11-01 12:55:55.089941 | orchestrator | Saturday 01 November 2025 12:54:17 +0000 (0:00:00.503) 0:00:01.216 ***** 2025-11-01 12:55:55.089952 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.089964 | orchestrator | 2025-11-01 12:55:55.089974 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-11-01 12:55:55.089985 | orchestrator | Saturday 01 November 2025 12:54:19 +0000 (0:00:02.180) 0:00:03.397 ***** 2025-11-01 12:55:55.090084 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-11-01 12:55:55.090100 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-11-01 12:55:55.090111 | orchestrator | 2025-11-01 12:55:55.090122 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-11-01 12:55:55.090132 | orchestrator | Saturday 01 November 2025 12:54:22 +0000 (0:00:02.655) 0:00:06.052 ***** 2025-11-01 12:55:55.090143 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090154 | orchestrator | 2025-11-01 12:55:55.090165 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-11-01 12:55:55.090175 | orchestrator | Saturday 01 November 2025 12:54:28 +0000 (0:00:05.812) 0:00:11.864 ***** 2025-11-01 12:55:55.090186 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090197 | orchestrator | 2025-11-01 12:55:55.090230 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-11-01 12:55:55.090241 | orchestrator | Saturday 01 November 2025 12:54:31 +0000 (0:00:02.944) 0:00:14.809 ***** 2025-11-01 12:55:55.090257 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-11-01 12:55:55.090269 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.090280 | orchestrator | 2025-11-01 12:55:55.090291 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-11-01 12:55:55.090302 | orchestrator | Saturday 01 November 2025 12:54:59 +0000 (0:00:28.150) 0:00:42.959 ***** 2025-11-01 12:55:55.090312 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090323 | orchestrator | 2025-11-01 12:55:55.090334 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:55:55.090346 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.090359 | orchestrator | 2025-11-01 12:55:55.090371 | orchestrator | 2025-11-01 12:55:55.090383 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:55:55.090395 | orchestrator | Saturday 01 November 2025 12:55:01 +0000 (0:00:02.724) 0:00:45.683 ***** 2025-11-01 12:55:55.090408 | orchestrator | =============================================================================== 2025-11-01 12:55:55.090420 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.15s 2025-11-01 12:55:55.090432 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 5.81s 2025-11-01 12:55:55.090444 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.94s 2025-11-01 12:55:55.090457 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.72s 2025-11-01 12:55:55.090470 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.66s 2025-11-01 12:55:55.090482 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.18s 2025-11-01 12:55:55.090494 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.50s 2025-11-01 12:55:55.090506 | orchestrator | 2025-11-01 12:55:55.090518 | orchestrator | 2025-11-01 12:55:55.090530 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-11-01 12:55:55.090543 | orchestrator | 2025-11-01 12:55:55.090555 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-11-01 12:55:55.090567 | orchestrator | Saturday 01 November 2025 12:54:19 +0000 (0:00:01.235) 0:00:01.235 ***** 2025-11-01 12:55:55.090580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-11-01 12:55:55.090594 | orchestrator | 2025-11-01 12:55:55.090606 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-11-01 12:55:55.090618 | orchestrator | Saturday 01 November 2025 12:54:20 +0000 (0:00:01.249) 0:00:02.484 ***** 2025-11-01 12:55:55.090631 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-11-01 12:55:55.090643 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-11-01 12:55:55.090661 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-11-01 12:55:55.090672 | orchestrator | 2025-11-01 12:55:55.090683 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-11-01 12:55:55.090694 | orchestrator | Saturday 01 November 2025 12:54:23 +0000 (0:00:02.410) 0:00:04.895 ***** 2025-11-01 12:55:55.090705 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090716 | orchestrator | 2025-11-01 12:55:55.090726 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-11-01 12:55:55.090737 | orchestrator | Saturday 01 November 2025 12:54:28 +0000 (0:00:05.484) 
0:00:10.379 ***** 2025-11-01 12:55:55.090760 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-11-01 12:55:55.090772 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.090783 | orchestrator | 2025-11-01 12:55:55.090794 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-11-01 12:55:55.090805 | orchestrator | Saturday 01 November 2025 12:55:06 +0000 (0:00:38.253) 0:00:48.633 ***** 2025-11-01 12:55:55.090815 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090826 | orchestrator | 2025-11-01 12:55:55.090838 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-11-01 12:55:55.090849 | orchestrator | Saturday 01 November 2025 12:55:11 +0000 (0:00:04.904) 0:00:53.538 ***** 2025-11-01 12:55:55.090860 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.090871 | orchestrator | 2025-11-01 12:55:55.090882 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-11-01 12:55:55.090892 | orchestrator | Saturday 01 November 2025 12:55:13 +0000 (0:00:01.948) 0:00:55.487 ***** 2025-11-01 12:55:55.090903 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090914 | orchestrator | 2025-11-01 12:55:55.090925 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-11-01 12:55:55.090936 | orchestrator | Saturday 01 November 2025 12:55:18 +0000 (0:00:04.752) 0:01:00.239 ***** 2025-11-01 12:55:55.090946 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.090957 | orchestrator | 2025-11-01 12:55:55.090968 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-11-01 12:55:55.090979 | orchestrator | Saturday 01 November 2025 12:55:19 +0000 (0:00:01.351) 0:01:01.591 ***** 2025-11-01 12:55:55.090989 | orchestrator | changed: 
[testbed-manager] 2025-11-01 12:55:55.091000 | orchestrator | 2025-11-01 12:55:55.091011 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-11-01 12:55:55.091022 | orchestrator | Saturday 01 November 2025 12:55:20 +0000 (0:00:00.845) 0:01:02.437 ***** 2025-11-01 12:55:55.091033 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.091043 | orchestrator | 2025-11-01 12:55:55.091054 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:55:55.091066 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.091076 | orchestrator | 2025-11-01 12:55:55.091087 | orchestrator | 2025-11-01 12:55:55.091098 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:55:55.091109 | orchestrator | Saturday 01 November 2025 12:55:21 +0000 (0:00:00.587) 0:01:03.024 ***** 2025-11-01 12:55:55.091120 | orchestrator | =============================================================================== 2025-11-01 12:55:55.091130 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 38.25s 2025-11-01 12:55:55.091141 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 5.48s 2025-11-01 12:55:55.091152 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 4.91s 2025-11-01 12:55:55.091163 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.75s 2025-11-01 12:55:55.091174 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.41s 2025-11-01 12:55:55.091184 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.95s 2025-11-01 12:55:55.091223 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.35s 
2025-11-01 12:55:55.091235 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.25s 2025-11-01 12:55:55.091246 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.85s 2025-11-01 12:55:55.091257 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.59s 2025-11-01 12:55:55.091267 | orchestrator | 2025-11-01 12:55:55.091278 | orchestrator | 2025-11-01 12:55:55.091289 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 12:55:55.091300 | orchestrator | 2025-11-01 12:55:55.091311 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 12:55:55.091322 | orchestrator | Saturday 01 November 2025 12:54:18 +0000 (0:00:01.447) 0:00:01.447 ***** 2025-11-01 12:55:55.091332 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-11-01 12:55:55.091343 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-11-01 12:55:55.091353 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-11-01 12:55:55.091364 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-11-01 12:55:55.091404 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-11-01 12:55:55.091416 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-11-01 12:55:55.091427 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-11-01 12:55:55.091438 | orchestrator | 2025-11-01 12:55:55.091449 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-11-01 12:55:55.091460 | orchestrator | 2025-11-01 12:55:55.091471 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-11-01 12:55:55.091481 | orchestrator | Saturday 01 November 2025 12:54:21 +0000 
(0:00:02.684) 0:00:04.131 ***** 2025-11-01 12:55:55.091505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:55:55.091525 | orchestrator | 2025-11-01 12:55:55.091536 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-11-01 12:55:55.091547 | orchestrator | Saturday 01 November 2025 12:54:25 +0000 (0:00:04.070) 0:00:08.202 ***** 2025-11-01 12:55:55.091557 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:55:55.091568 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:55:55.091579 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:55:55.091590 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.091601 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:55:55.091617 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:55:55.091629 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:55:55.091639 | orchestrator | 2025-11-01 12:55:55.091650 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-11-01 12:55:55.091661 | orchestrator | Saturday 01 November 2025 12:54:30 +0000 (0:00:05.155) 0:00:13.357 ***** 2025-11-01 12:55:55.091672 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:55:55.091682 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:55:55.091693 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:55:55.091704 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:55:55.091715 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:55:55.091725 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:55:55.091736 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.091747 | orchestrator | 2025-11-01 12:55:55.091758 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-11-01 12:55:55.091768 | 
orchestrator | Saturday 01 November 2025 12:54:35 +0000 (0:00:05.032) 0:00:18.390 ***** 2025-11-01 12:55:55.091779 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:55:55.091790 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:55:55.091801 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:55:55.091812 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.091829 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:55:55.091840 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:55:55.091851 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:55:55.091862 | orchestrator | 2025-11-01 12:55:55.091873 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-11-01 12:55:55.091884 | orchestrator | Saturday 01 November 2025 12:54:39 +0000 (0:00:03.274) 0:00:21.667 ***** 2025-11-01 12:55:55.091894 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:55:55.091905 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:55:55.091916 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:55:55.091927 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:55:55.091937 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:55:55.091948 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:55:55.091959 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.091970 | orchestrator | 2025-11-01 12:55:55.091981 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-11-01 12:55:55.091991 | orchestrator | Saturday 01 November 2025 12:54:50 +0000 (0:00:11.373) 0:00:33.040 ***** 2025-11-01 12:55:55.092002 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:55:55.092017 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:55:55.092029 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:55:55.092039 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:55:55.092050 | orchestrator | changed: [testbed-node-0] 
2025-11-01 12:55:55.092061 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:55:55.092072 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.092082 | orchestrator | 2025-11-01 12:55:55.092093 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-11-01 12:55:55.092104 | orchestrator | Saturday 01 November 2025 12:55:23 +0000 (0:00:32.943) 0:01:05.984 ***** 2025-11-01 12:55:55.092116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:55:55.092129 | orchestrator | 2025-11-01 12:55:55.092140 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-11-01 12:55:55.092151 | orchestrator | Saturday 01 November 2025 12:55:25 +0000 (0:00:01.849) 0:01:07.834 ***** 2025-11-01 12:55:55.092161 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-11-01 12:55:55.092173 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-11-01 12:55:55.092184 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-11-01 12:55:55.092194 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-11-01 12:55:55.092250 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-11-01 12:55:55.092262 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-11-01 12:55:55.092273 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-11-01 12:55:55.092283 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-11-01 12:55:55.092294 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-11-01 12:55:55.092305 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-11-01 12:55:55.092316 | orchestrator | changed: [testbed-node-0] => 
(item=stream.conf) 2025-11-01 12:55:55.092326 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-11-01 12:55:55.092337 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-11-01 12:55:55.092348 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-11-01 12:55:55.092358 | orchestrator | 2025-11-01 12:55:55.092369 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-11-01 12:55:55.092380 | orchestrator | Saturday 01 November 2025 12:55:33 +0000 (0:00:07.873) 0:01:15.708 ***** 2025-11-01 12:55:55.092391 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.092402 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:55:55.092413 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:55:55.092430 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:55:55.092441 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:55:55.092452 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:55:55.092463 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:55:55.092473 | orchestrator | 2025-11-01 12:55:55.092484 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-11-01 12:55:55.092495 | orchestrator | Saturday 01 November 2025 12:55:34 +0000 (0:00:01.841) 0:01:17.550 ***** 2025-11-01 12:55:55.092506 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.092517 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:55:55.092527 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:55:55.092538 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:55:55.092549 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:55:55.092560 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:55:55.092570 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:55:55.092581 | orchestrator | 2025-11-01 12:55:55.092592 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2025-11-01 12:55:55.092609 | orchestrator | Saturday 01 November 2025 12:55:37 +0000 (0:00:02.438) 0:01:19.988 ***** 2025-11-01 12:55:55.092621 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:55:55.092631 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:55:55.092642 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:55:55.092653 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.092664 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:55:55.092674 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:55:55.092685 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:55:55.092696 | orchestrator | 2025-11-01 12:55:55.092706 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-11-01 12:55:55.092717 | orchestrator | Saturday 01 November 2025 12:55:39 +0000 (0:00:02.415) 0:01:22.403 ***** 2025-11-01 12:55:55.092728 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:55:55.092739 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:55:55.092749 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:55:55.092760 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:55:55.092770 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:55:55.092781 | orchestrator | ok: [testbed-manager] 2025-11-01 12:55:55.092792 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:55:55.092802 | orchestrator | 2025-11-01 12:55:55.092814 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-11-01 12:55:55.092824 | orchestrator | Saturday 01 November 2025 12:55:43 +0000 (0:00:04.198) 0:01:26.602 ***** 2025-11-01 12:55:55.092836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-11-01 12:55:55.092848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 12:55:55.092859 | orchestrator | 2025-11-01 12:55:55.092870 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-11-01 12:55:55.092881 | orchestrator | Saturday 01 November 2025 12:55:45 +0000 (0:00:01.545) 0:01:28.147 ***** 2025-11-01 12:55:55.092892 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.092902 | orchestrator | 2025-11-01 12:55:55.092913 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-11-01 12:55:55.092929 | orchestrator | Saturday 01 November 2025 12:55:47 +0000 (0:00:02.364) 0:01:30.512 ***** 2025-11-01 12:55:55.092940 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:55:55.092951 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:55:55.092961 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:55:55.092972 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:55:55.092983 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:55:55.092994 | orchestrator | changed: [testbed-manager] 2025-11-01 12:55:55.093004 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:55:55.093015 | orchestrator | 2025-11-01 12:55:55.093026 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:55:55.093048 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093060 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093071 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093082 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093093 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-11-01 12:55:55.093104 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093114 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:55:55.093125 | orchestrator | 2025-11-01 12:55:55.093136 | orchestrator | 2025-11-01 12:55:55.093147 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:55:55.093158 | orchestrator | Saturday 01 November 2025 12:55:51 +0000 (0:00:03.767) 0:01:34.280 ***** 2025-11-01 12:55:55.093169 | orchestrator | =============================================================================== 2025-11-01 12:55:55.093180 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 32.94s 2025-11-01 12:55:55.093190 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.37s 2025-11-01 12:55:55.093246 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.87s 2025-11-01 12:55:55.093259 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 5.16s 2025-11-01 12:55:55.093270 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.03s 2025-11-01 12:55:55.093280 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 4.20s 2025-11-01 12:55:55.093291 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 4.07s 2025-11-01 12:55:55.093302 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.77s 2025-11-01 12:55:55.093312 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.27s 2025-11-01 12:55:55.093323 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.68s 2025-11-01 
12:55:55.093334 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.44s 2025-11-01 12:55:55.093351 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.42s 2025-11-01 12:55:55.093362 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.36s 2025-11-01 12:55:55.093373 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.85s 2025-11-01 12:55:55.093383 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.84s 2025-11-01 12:55:55.093394 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.55s 2025-11-01 12:55:58.139481 | orchestrator | 2025-11-01 12:55:58 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:55:58.145059 | orchestrator | 2025-11-01 12:55:58 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED 2025-11-01 12:55:58.148524 | orchestrator | 2025-11-01 12:55:58 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:55:58.149954 | orchestrator | 2025-11-01 12:55:58 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:55:58.152925 | orchestrator | 2025-11-01 12:55:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:01.191843 | orchestrator | 2025-11-01 12:56:01 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:01.193965 | orchestrator | 2025-11-01 12:56:01 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state STARTED 2025-11-01 12:56:01.195107 | orchestrator | 2025-11-01 12:56:01 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:01.196627 | orchestrator | 2025-11-01 12:56:01 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:01.196668 | orchestrator | 
2025-11-01 12:56:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:04.242391 | orchestrator | 2025-11-01 12:56:04 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:04.242480 | orchestrator | 2025-11-01 12:56:04 | INFO  | Task 7118cd7a-87ca-4cd6-9bc0-2762999c1406 is in state SUCCESS 2025-11-01 12:56:04.244268 | orchestrator | 2025-11-01 12:56:04 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:04.245343 | orchestrator | 2025-11-01 12:56:04 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:04.245514 | orchestrator | 2025-11-01 12:56:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:07.274934 | orchestrator | 2025-11-01 12:56:07 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:07.276798 | orchestrator | 2025-11-01 12:56:07 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:07.278807 | orchestrator | 2025-11-01 12:56:07 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:07.278954 | orchestrator | 2025-11-01 12:56:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:10.318903 | orchestrator | 2025-11-01 12:56:10 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:10.320144 | orchestrator | 2025-11-01 12:56:10 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:10.323443 | orchestrator | 2025-11-01 12:56:10 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:10.323472 | orchestrator | 2025-11-01 12:56:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:13.370784 | orchestrator | 2025-11-01 12:56:13 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:13.377942 | orchestrator | 2025-11-01 12:56:13 | INFO  | Task 
16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:13.377972 | orchestrator | 2025-11-01 12:56:13 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:13.377984 | orchestrator | 2025-11-01 12:56:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:16.466130 | orchestrator | 2025-11-01 12:56:16 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:16.467052 | orchestrator | 2025-11-01 12:56:16 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:16.469216 | orchestrator | 2025-11-01 12:56:16 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:16.469241 | orchestrator | 2025-11-01 12:56:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:19.527545 | orchestrator | 2025-11-01 12:56:19 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:19.528808 | orchestrator | 2025-11-01 12:56:19 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:19.530339 | orchestrator | 2025-11-01 12:56:19 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:19.530368 | orchestrator | 2025-11-01 12:56:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:22.564819 | orchestrator | 2025-11-01 12:56:22 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:22.565802 | orchestrator | 2025-11-01 12:56:22 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:22.572270 | orchestrator | 2025-11-01 12:56:22 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:22.573018 | orchestrator | 2025-11-01 12:56:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:25.616834 | orchestrator | 2025-11-01 12:56:25 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state 
STARTED 2025-11-01 12:56:25.618700 | orchestrator | 2025-11-01 12:56:25 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:25.620533 | orchestrator | 2025-11-01 12:56:25 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:25.620911 | orchestrator | 2025-11-01 12:56:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:28.664042 | orchestrator | 2025-11-01 12:56:28 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:28.664920 | orchestrator | 2025-11-01 12:56:28 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:28.666389 | orchestrator | 2025-11-01 12:56:28 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:28.666417 | orchestrator | 2025-11-01 12:56:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:31.710558 | orchestrator | 2025-11-01 12:56:31 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:31.713251 | orchestrator | 2025-11-01 12:56:31 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:31.714600 | orchestrator | 2025-11-01 12:56:31 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:31.714637 | orchestrator | 2025-11-01 12:56:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:34.762083 | orchestrator | 2025-11-01 12:56:34 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:56:34.764751 | orchestrator | 2025-11-01 12:56:34 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED 2025-11-01 12:56:34.766126 | orchestrator | 2025-11-01 12:56:34 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:56:34.766150 | orchestrator | 2025-11-01 12:56:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:56:37.805529 | orchestrator | 
2025-11-01 12:56:37 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:56:37.807037 | orchestrator | 2025-11-01 12:56:37 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:56:37.808797 | orchestrator | 2025-11-01 12:56:37 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:56:37.811639 | orchestrator | 2025-11-01 12:56:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles from 12:56:40 to 12:56:56, differing only in timestamps ...]
2025-11-01 12:56:59.153901 | orchestrator | 2025-11-01 12:56:59 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:56:59.155779 | orchestrator | 2025-11-01 12:56:59 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state STARTED
2025-11-01 12:56:59.160024 | orchestrator | 2025-11-01 12:56:59 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:56:59.160131 | orchestrator | 2025-11-01 12:56:59 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:02.211336 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:02.214077 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:02.214891 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:02.222761 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 16ad4ddb-3582-40d3-926f-4e614b3560b3 is in state SUCCESS
2025-11-01 12:57:02.225896 | orchestrator |
2025-11-01 12:57:02.225908 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-11-01 12:57:02.225920 | orchestrator |
2025-11-01 12:57:02.225931 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-11-01 12:57:02.225943 | orchestrator | Saturday 01 November 2025 12:54:48 +0000 (0:00:00.398) 0:00:00.398 *****
2025-11-01 12:57:02.225954 | orchestrator | ok: [testbed-manager]
2025-11-01 12:57:02.225966 | orchestrator |
2025-11-01 12:57:02.225977 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-11-01 12:57:02.225987 | orchestrator | Saturday 01 November 2025 12:54:49 +0000 (0:00:00.958) 0:00:01.357 *****
2025-11-01 12:57:02.225998 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-11-01 12:57:02.226009 | orchestrator |
2025-11-01 12:57:02.226083 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-11-01 12:57:02.226096 | orchestrator | Saturday 01 November 2025 12:54:50 +0000 (0:00:00.869) 0:00:02.226 *****
2025-11-01 12:57:02.226107 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.226119 | orchestrator |
2025-11-01 12:57:02.226131 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-11-01 12:57:02.226142 | orchestrator | Saturday 01 November 2025 12:54:52 +0000 (0:00:01.345) 0:00:03.572 *****
2025-11-01 12:57:02.226153 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-11-01 12:57:02.226163 | orchestrator | ok: [testbed-manager]
2025-11-01 12:57:02.226174 | orchestrator |
2025-11-01 12:57:02.226185 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-11-01 12:57:02.226196 | orchestrator | Saturday 01 November 2025 12:55:55 +0000 (0:01:03.734) 0:01:07.306 *****
2025-11-01 12:57:02.226239 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.226250 | orchestrator |
2025-11-01 12:57:02.226261 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:57:02.226272 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:57:02.226309 | orchestrator |
2025-11-01 12:57:02.226334 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:57:02.226346 | orchestrator | Saturday 01 November 2025 12:56:02 +0000 (0:00:06.647) 0:01:13.954 *****
2025-11-01 12:57:02.226357 | orchestrator | ===============================================================================
2025-11-01 12:57:02.226369 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 63.73s
2025-11-01 12:57:02.226380 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.65s
2025-11-01 12:57:02.226391 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.35s
2025-11-01 12:57:02.226403 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.96s
2025-11-01 12:57:02.226414 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.87s
2025-11-01 12:57:02.226425 | orchestrator |
2025-11-01 12:57:02.226447 | orchestrator | PLAY [Apply role common] *******************************************************
2025-11-01 12:57:02.226459 | orchestrator |
2025-11-01 12:57:02.226470 | orchestrator | TASK [common : include_tasks] **************************************************
2025-11-01 12:57:02.226481 | orchestrator | Saturday 01 November 2025 12:54:02 +0000 (0:00:00.416) 0:00:00.416 *****
2025-11-01 12:57:02.226492 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 12:57:02.226506 | orchestrator |
2025-11-01 12:57:02.226517 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-11-01 12:57:02.226545 | orchestrator | Saturday 01 November 2025 12:54:04 +0000 (0:00:02.098) 0:00:02.515 *****
2025-11-01 12:57:02.226556 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226568 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226579 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226590 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226609 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226620 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226632 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226645 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226656 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226667 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226678 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226690 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226701 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-11-01 12:57:02.226712 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226723 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226735 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226761 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226773 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226785 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-11-01 12:57:02.226796 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226807 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-11-01 12:57:02.226818 | orchestrator |
2025-11-01 12:57:02.226830 | orchestrator | TASK [common : include_tasks] **************************************************
orchestrator | Saturday 01 November 2025 12:54:11 +0000 (0:00:06.384) 0:00:08.899 *****
2025-11-01 12:57:02.226852 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 12:57:02.226865 | orchestrator |
2025-11-01 12:57:02.226876 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-11-01 12:57:02.226887 | orchestrator | Saturday 01 November 2025 12:54:13 +0000 (0:00:02.075) 0:00:10.975 *****
2025-11-01 12:57:02.226904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.226920 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.226941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.226953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.226971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227006 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227043 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227139 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227283 | orchestrator |
2025-11-01 12:57:02.227294 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-11-01 12:57:02.227305 | orchestrator | Saturday 01 November 2025 12:54:19 +0000 (0:00:06.419) 0:00:17.394 *****
2025-11-01 12:57:02.227322 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227346 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227365 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:57:02.227376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227410 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:57:02.227425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227467 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:57:02.227478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227521 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:57:02.227532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227571 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:57:02.227582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227653 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:57:02.227664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227676 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:57:02.227687 | orchestrator |
2025-11-01 12:57:02.227697 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-11-01 12:57:02.227708 | orchestrator | Saturday 01 November 2025 12:54:22 +0000 (0:00:02.409) 0:00:19.804 *****
2025-11-01 12:57:02.227724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.227810 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:57:02.227821 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:57:02.227832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.227859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227917 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:57:02.227928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 12:57:02.227940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.227962 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:57:02.227973 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:57:02.227984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 12:57:02.227995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.228019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.228030 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:57:02.228042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-11-01 
12:57:02.228053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.228069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.228080 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:57:02.228091 | orchestrator |
2025-11-01 12:57:02.228102 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-11-01 12:57:02.228113 | orchestrator | Saturday 01 November 2025 12:54:26 +0000 (0:00:04.666) 0:00:24.470 *****
2025-11-01 12:57:02.228124 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:57:02.228135 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:57:02.228146 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:57:02.228157 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:57:02.228185 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:57:02.228197 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:57:02.228225 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:57:02.228236 | orchestrator |
2025-11-01 12:57:02.228247 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-11-01 12:57:02.228258 | orchestrator | Saturday 01 November 2025 12:54:28 +0000 (0:00:01.502) 0:00:25.972 *****
2025-11-01 12:57:02.228269 | orchestrator | skipping: [testbed-manager]
2025-11-01 12:57:02.228279 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:57:02.228290 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:57:02.228300 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:57:02.228311 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:57:02.228322 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:57:02.228332 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:57:02.228343 | orchestrator |
2025-11-01 12:57:02.228354 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-11-01 12:57:02.228364 | orchestrator | Saturday 01 November 2025 12:54:29 +0000 (0:00:01.531) 0:00:27.504 *****
2025-11-01 12:57:02.228380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.228399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228421 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228484 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.228530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228553 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228576 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.228656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.228668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.228678 | orchestrator |
2025-11-01 12:57:02.228689 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-11-01 12:57:02.228700 | orchestrator | Saturday 01 November 2025 12:54:40 +0000 (0:00:10.936) 0:00:38.441 *****
2025-11-01 12:57:02.228711 | orchestrator | [WARNING]: Skipped
2025-11-01 12:57:02.228722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-11-01 12:57:02.228733 | orchestrator | to this access issue:
2025-11-01 12:57:02.228744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-11-01 12:57:02.228755 | orchestrator | directory
2025-11-01 12:57:02.228766 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 12:57:02.228776 | orchestrator |
2025-11-01 12:57:02.228787 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-11-01 12:57:02.228808 | orchestrator | Saturday 01 November 2025 12:54:44 +0000 (0:00:03.345) 0:00:41.786 *****
2025-11-01 12:57:02.228819 | orchestrator | [WARNING]: Skipped
2025-11-01 12:57:02.228829 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-11-01 12:57:02.228840 | orchestrator | to this access issue:
2025-11-01 12:57:02.228851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-11-01 12:57:02.228861 | orchestrator | directory
2025-11-01 12:57:02.228872 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 12:57:02.228883 | orchestrator |
2025-11-01 12:57:02.228894 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-11-01 12:57:02.228904 | orchestrator | Saturday 01 November 2025 12:54:46 +0000 (0:00:02.261) 0:00:44.048 *****
2025-11-01 12:57:02.228929 | orchestrator | [WARNING]: Skipped
2025-11-01 12:57:02.228940 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-11-01 12:57:02.228951 | orchestrator | to this access issue:
2025-11-01 12:57:02.228961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-11-01 12:57:02.228972 | orchestrator | directory
2025-11-01 12:57:02.228983 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 12:57:02.228994 | orchestrator |
2025-11-01 12:57:02.229004 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-11-01 12:57:02.229015 | orchestrator | Saturday 01 November 2025 12:54:47 +0000 (0:00:01.053) 0:00:45.661 *****
2025-11-01 12:57:02.229025 | orchestrator | [WARNING]: Skipped
2025-11-01 12:57:02.229036 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-11-01 12:57:02.229052 | orchestrator | to this access issue:
2025-11-01 12:57:02.229063 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-11-01 12:57:02.229073 | orchestrator | directory
2025-11-01 12:57:02.229084 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 12:57:02.229095 | orchestrator |
2025-11-01 12:57:02.229106 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-11-01 12:57:02.229116 | orchestrator | Saturday 01 November 2025 12:54:48 +0000 (0:00:01.053) 0:00:46.715 *****
2025-11-01 12:57:02.229127 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:57:02.229138 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:57:02.229148 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:57:02.229159 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:57:02.229169 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:57:02.229180 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.229190 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:57:02.229217 | orchestrator |
2025-11-01 12:57:02.229228 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-11-01 12:57:02.229239 | orchestrator | Saturday 01 November 2025 12:54:55 +0000 (0:00:06.224) 0:00:52.940 *****
2025-11-01 12:57:02.229250 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229272 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229300 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229311 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229322 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-11-01 12:57:02.229332 | orchestrator |
2025-11-01 12:57:02.229343 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-11-01 12:57:02.229361 | orchestrator | Saturday 01 November 2025 12:55:00 +0000 (0:00:06.017) 0:00:58.386 *****
2025-11-01 12:57:02.229372 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:57:02.229383 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:57:02.229394 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:57:02.229404 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:57:02.229415 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:57:02.229426 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.229436 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:57:02.229447 | orchestrator |
2025-11-01 12:57:02.229458 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-11-01 12:57:02.229468 | orchestrator | Saturday 01 November 2025 12:55:06 +0000 (0:00:06.017) 0:01:04.403 *****
2025-11-01 12:57:02.229480 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.229491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.229524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.229535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229553 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.229574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.229585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229597 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-11-01 12:57:02.229608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229619 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.229635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229647 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.229664 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.229682 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.229693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229705 | orchestrator | 
ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-11-01 12:57:02.229716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 12:57:02.229727 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 12:57:02.229743 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.229754 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.229765 | orchestrator |
2025-11-01 12:57:02.229782 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-11-01 12:57:02.229793 | orchestrator | Saturday 01 November 2025 12:55:11 +0000 (0:00:05.182) 0:01:09.585 *****
2025-11-01 12:57:02.229804 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.229814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.229825 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.229970 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.229986 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.229997 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.230008 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-11-01 12:57:02.230057 | orchestrator |
2025-11-01 12:57:02.230072 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-11-01 12:57:02.230083 | orchestrator | Saturday 01 November 2025 12:55:16 +0000 (0:00:04.663) 0:01:14.249 *****
2025-11-01 12:57:02.230094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230115 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230125 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230136 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230147 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230157 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-11-01 12:57:02.230168 | orchestrator |
2025-11-01 12:57:02.230179 | orchestrator | TASK [common : Check common containers] ****************************************
2025-11-01 12:57:02.230189 | orchestrator | Saturday 01 November 2025 12:55:20 +0000 (0:00:03.729) 0:01:17.979 *****
2025-11-01 12:57:02.230256 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY':
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230353 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230421 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230450 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-11-01 12:57:02.230462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 12:57:02.230629 | orchestrator |
2025-11-01 12:57:02.230644 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-11-01 12:57:02.230654 | orchestrator | Saturday 01 November 2025 12:55:25 +0000 (0:00:05.001) 0:01:22.980 *****
2025-11-01 12:57:02.230664 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.230674 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:57:02.230683 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:57:02.230693 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:57:02.230702 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:57:02.230712 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:57:02.230721 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:57:02.230731 | orchestrator |
2025-11-01 12:57:02.230740 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-11-01 12:57:02.230750 | orchestrator | Saturday 01 November 2025 12:55:28 +0000 (0:00:03.252) 0:01:26.233 *****
2025-11-01 12:57:02.230759 | orchestrator | changed: [testbed-manager]
2025-11-01 12:57:02.230769 | orchestrator | changed: [testbed-node-0]
2025-11-01
12:57:02.230778 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:02.230788 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:02.230797 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:57:02.230807 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:57:02.230816 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:57:02.230825 | orchestrator | 2025-11-01 12:57:02.230835 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230844 | orchestrator | Saturday 01 November 2025 12:55:31 +0000 (0:00:03.275) 0:01:29.508 ***** 2025-11-01 12:57:02.230854 | orchestrator | 2025-11-01 12:57:02.230863 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230873 | orchestrator | Saturday 01 November 2025 12:55:31 +0000 (0:00:00.119) 0:01:29.628 ***** 2025-11-01 12:57:02.230883 | orchestrator | 2025-11-01 12:57:02.230892 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230902 | orchestrator | Saturday 01 November 2025 12:55:31 +0000 (0:00:00.102) 0:01:29.731 ***** 2025-11-01 12:57:02.230911 | orchestrator | 2025-11-01 12:57:02.230921 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230931 | orchestrator | Saturday 01 November 2025 12:55:32 +0000 (0:00:00.392) 0:01:30.123 ***** 2025-11-01 12:57:02.230946 | orchestrator | 2025-11-01 12:57:02.230955 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230965 | orchestrator | Saturday 01 November 2025 12:55:32 +0000 (0:00:00.112) 0:01:30.236 ***** 2025-11-01 12:57:02.230974 | orchestrator | 2025-11-01 12:57:02.230984 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.230993 | orchestrator | Saturday 01 
November 2025 12:55:32 +0000 (0:00:00.094) 0:01:30.330 ***** 2025-11-01 12:57:02.231003 | orchestrator | 2025-11-01 12:57:02.231012 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-11-01 12:57:02.231022 | orchestrator | Saturday 01 November 2025 12:55:32 +0000 (0:00:00.095) 0:01:30.426 ***** 2025-11-01 12:57:02.231031 | orchestrator | 2025-11-01 12:57:02.231041 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-11-01 12:57:02.231050 | orchestrator | Saturday 01 November 2025 12:55:32 +0000 (0:00:00.116) 0:01:30.542 ***** 2025-11-01 12:57:02.231059 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:57:02.231069 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:02.231079 | orchestrator | changed: [testbed-manager] 2025-11-01 12:57:02.231088 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:02.231098 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:57:02.231107 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:57:02.231117 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:57:02.231126 | orchestrator | 2025-11-01 12:57:02.231136 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-11-01 12:57:02.231145 | orchestrator | Saturday 01 November 2025 12:56:12 +0000 (0:00:39.448) 0:02:09.991 ***** 2025-11-01 12:57:02.231155 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:57:02.231164 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:57:02.231174 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:57:02.231183 | orchestrator | changed: [testbed-manager] 2025-11-01 12:57:02.231193 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:57:02.231221 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:02.231232 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:02.231242 | orchestrator | 2025-11-01 12:57:02.231253 | orchestrator | RUNNING 
HANDLER [common : Initializing toolbox container using normal user] **** 2025-11-01 12:57:02.231264 | orchestrator | Saturday 01 November 2025 12:56:47 +0000 (0:00:35.614) 0:02:45.606 ***** 2025-11-01 12:57:02.231276 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:57:02.231286 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:57:02.231302 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:57:02.231313 | orchestrator | ok: [testbed-manager] 2025-11-01 12:57:02.231324 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:57:02.231334 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:57:02.231345 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:57:02.231356 | orchestrator | 2025-11-01 12:57:02.231366 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-11-01 12:57:02.231378 | orchestrator | Saturday 01 November 2025 12:56:50 +0000 (0:00:02.377) 0:02:47.983 ***** 2025-11-01 12:57:02.231388 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:02.231399 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:57:02.231410 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:57:02.231421 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:57:02.231432 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:57:02.231443 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:02.231454 | orchestrator | changed: [testbed-manager] 2025-11-01 12:57:02.231464 | orchestrator | 2025-11-01 12:57:02.231475 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:57:02.231487 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 12:57:02.231498 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-11-01 12:57:02.231515 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 
2025-11-01 12:57:02.231532 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-01 12:57:02.231544 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-01 12:57:02.231555 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-01 12:57:02.231564 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-11-01 12:57:02.231574 | orchestrator |
2025-11-01 12:57:02.231583 | orchestrator |
2025-11-01 12:57:02.231593 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:57:02.231602 | orchestrator | Saturday 01 November 2025 12:56:59 +0000 (0:00:09.154) 0:02:57.137 *****
2025-11-01 12:57:02.231612 | orchestrator | ===============================================================================
2025-11-01 12:57:02.231621 | orchestrator | common : Restart fluentd container ------------------------------------- 39.45s
2025-11-01 12:57:02.231631 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.61s
2025-11-01 12:57:02.231640 | orchestrator | common : Copying over config.json files for services ------------------- 10.94s
2025-11-01 12:57:02.231650 | orchestrator | common : Restart cron container ----------------------------------------- 9.15s
2025-11-01 12:57:02.231659 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.42s
2025-11-01 12:57:02.231669 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.38s
2025-11-01 12:57:02.231678 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.22s
2025-11-01 12:57:02.231688 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 6.02s
2025-11-01 12:57:02.231697 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.45s
2025-11-01 12:57:02.231706 | orchestrator | common : Ensuring config directories have correct owner and permission --- 5.18s
2025-11-01 12:57:02.231716 | orchestrator | common : Check common containers ---------------------------------------- 5.00s
2025-11-01 12:57:02.231725 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.67s
2025-11-01 12:57:02.231735 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.66s
2025-11-01 12:57:02.231744 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.73s
2025-11-01 12:57:02.231754 | orchestrator | common : Find custom fluentd input config files ------------------------- 3.35s
2025-11-01 12:57:02.231763 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 3.28s
2025-11-01 12:57:02.231773 | orchestrator | common : Creating log volume -------------------------------------------- 3.25s
2025-11-01 12:57:02.231782 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.41s
2025-11-01 12:57:02.231792 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.38s
2025-11-01 12:57:02.231801 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.26s
2025-11-01 12:57:02.231811 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:02.231821 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:02.231831 | orchestrator | 2025-11-01 12:57:02 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:02.231840 | orchestrator | 2025-11-01 12:57:02 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:05.269489 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:05.270150 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:05.271366 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:05.272349 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:05.273057 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:05.273894 | orchestrator | 2025-11-01 12:57:05 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:05.273915 | orchestrator | 2025-11-01 12:57:05 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:08.319119 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:08.321304 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:08.321331 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:08.322096 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:08.323272 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:08.325970 | orchestrator | 2025-11-01 12:57:08 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:08.325992 | orchestrator | 2025-11-01 12:57:08 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:11.371945 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:11.372740 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:11.373911 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:11.374763 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:11.375953 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:11.376886 | orchestrator | 2025-11-01 12:57:11 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:11.376906 | orchestrator | 2025-11-01 12:57:11 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:14.419940 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:14.419983 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:14.421629 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:14.424237 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:14.425069 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:14.426327 | orchestrator | 2025-11-01 12:57:14 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:14.426347 | orchestrator | 2025-11-01 12:57:14 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:17.485244 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:17.489087 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:17.489978 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:17.490652 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:17.492916 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:17.493006 | orchestrator | 2025-11-01 12:57:17 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:17.493021 | orchestrator | 2025-11-01 12:57:17 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:20.535606 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:20.537776 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:20.540068 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:20.542913 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:20.544437 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:20.546659 | orchestrator | 2025-11-01 12:57:20 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:20.546916 | orchestrator | 2025-11-01 12:57:20 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:23.605274 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:23.606783 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state STARTED
2025-11-01 12:57:23.608685 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:23.613408 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:23.615150 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:23.618895 | orchestrator | 2025-11-01 12:57:23 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:23.618917 | orchestrator | 2025-11-01 12:57:23 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:26.713339 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:26.713995 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task 9cd98000-3d72-427a-be60-f41d7f8329cf is in state SUCCESS
2025-11-01 12:57:26.718568 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:26.720425 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:26.722455 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:26.725255 | orchestrator | 2025-11-01 12:57:26 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:26.725288 | orchestrator | 2025-11-01 12:57:26 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:29.832576 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:29.832691 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:29.832704 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:57:29.832715 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:29.832724 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:29.832734 | orchestrator | 2025-11-01 12:57:29 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:29.832744 | orchestrator | 2025-11-01 12:57:29 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:32.864828 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:32.865477 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:32.872561 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:57:32.873420 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:32.874314 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:32.879822 | orchestrator | 2025-11-01 12:57:32 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:32.879847 | orchestrator | 2025-11-01 12:57:32 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:57:35.963300 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:35.972931 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:35.976872 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:57:35.982829 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:35.985784 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state STARTED
2025-11-01 12:57:35.986872 | orchestrator | 2025-11-01 12:57:35 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:57:35.986895 | orchestrator | 2025-11-01 12:57:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:57:39.152438 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:57:39.152535 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:57:39.158096 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:57:39.158242 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED 2025-11-01 12:57:39.158254 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task 0d2e46b5-4a7f-4edb-9a5c-9e055709606e is in state SUCCESS 2025-11-01 12:57:39.158264 | orchestrator | 2025-11-01 12:57:39 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:57:39.158275 | orchestrator | 2025-11-01 12:57:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:57:39.158909 | orchestrator | 2025-11-01 12:57:39.158930 | orchestrator | 2025-11-01 12:57:39.158964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 12:57:39.158975 | orchestrator | 2025-11-01 12:57:39.158985 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 12:57:39.158995 | orchestrator | Saturday 01 November 2025 12:57:07 +0000 (0:00:00.549) 0:00:00.549 ***** 2025-11-01 12:57:39.159006 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:57:39.159017 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:57:39.159027 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:57:39.159036 | orchestrator | 2025-11-01 12:57:39.159046 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-11-01 12:57:39.159056 | orchestrator | Saturday 01 November 2025 12:57:08 +0000 (0:00:00.825) 0:00:01.375 ***** 2025-11-01 12:57:39.159066 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-11-01 12:57:39.159076 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-11-01 12:57:39.159086 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-11-01 12:57:39.159095 | orchestrator | 2025-11-01 12:57:39.159105 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-11-01 12:57:39.159115 | orchestrator | 2025-11-01 12:57:39.159124 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-11-01 12:57:39.159134 | orchestrator | Saturday 01 November 2025 12:57:08 +0000 (0:00:00.668) 0:00:02.043 ***** 2025-11-01 12:57:39.159144 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 12:57:39.159154 | orchestrator | 2025-11-01 12:57:39.159164 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-11-01 12:57:39.159173 | orchestrator | Saturday 01 November 2025 12:57:09 +0000 (0:00:00.852) 0:00:02.895 ***** 2025-11-01 12:57:39.159183 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-11-01 12:57:39.159193 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-11-01 12:57:39.159227 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-11-01 12:57:39.159237 | orchestrator | 2025-11-01 12:57:39.159246 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-11-01 12:57:39.159256 | orchestrator | Saturday 01 November 2025 12:57:10 +0000 (0:00:01.074) 0:00:03.969 ***** 2025-11-01 12:57:39.159266 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-11-01 
12:57:39.159275 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-11-01 12:57:39.159285 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-11-01 12:57:39.159295 | orchestrator | 2025-11-01 12:57:39.159304 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-11-01 12:57:39.159314 | orchestrator | Saturday 01 November 2025 12:57:14 +0000 (0:00:03.280) 0:00:07.250 ***** 2025-11-01 12:57:39.159324 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:57:39.159333 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:39.159343 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:39.159353 | orchestrator | 2025-11-01 12:57:39.159362 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-11-01 12:57:39.159372 | orchestrator | Saturday 01 November 2025 12:57:17 +0000 (0:00:03.220) 0:00:10.470 ***** 2025-11-01 12:57:39.159382 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:57:39.159391 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:57:39.159401 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:57:39.159410 | orchestrator | 2025-11-01 12:57:39.159421 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 12:57:39.159431 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:57:39.159455 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:57:39.159466 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 12:57:39.159482 | orchestrator | 2025-11-01 12:57:39.159492 | orchestrator | 2025-11-01 12:57:39.159502 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 12:57:39.159512 | orchestrator | Saturday 01 
November 2025 12:57:24 +0000 (0:00:07.278) 0:00:17.749 ***** 2025-11-01 12:57:39.159522 | orchestrator | =============================================================================== 2025-11-01 12:57:39.159533 | orchestrator | memcached : Restart memcached container --------------------------------- 7.28s 2025-11-01 12:57:39.159544 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.28s 2025-11-01 12:57:39.159555 | orchestrator | memcached : Check memcached container ----------------------------------- 3.22s 2025-11-01 12:57:39.159566 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.07s 2025-11-01 12:57:39.159576 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2025-11-01 12:57:39.159587 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s 2025-11-01 12:57:39.159598 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2025-11-01 12:57:39.159608 | orchestrator | 2025-11-01 12:57:39.159619 | orchestrator | 2025-11-01 12:57:39.159630 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 12:57:39.159641 | orchestrator | 2025-11-01 12:57:39.159652 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 12:57:39.159663 | orchestrator | Saturday 01 November 2025 12:57:08 +0000 (0:00:00.645) 0:00:00.645 ***** 2025-11-01 12:57:39.159673 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:57:39.159684 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:57:39.159695 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:57:39.159705 | orchestrator | 2025-11-01 12:57:39.159716 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 12:57:39.159737 | orchestrator | Saturday 01 November 2025 12:57:09 
+0000 (0:00:00.515) 0:00:01.160 ***** 2025-11-01 12:57:39.159748 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-11-01 12:57:39.159758 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-11-01 12:57:39.159770 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-11-01 12:57:39.159781 | orchestrator | 2025-11-01 12:57:39.159791 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-11-01 12:57:39.159802 | orchestrator | 2025-11-01 12:57:39.159813 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-11-01 12:57:39.159823 | orchestrator | Saturday 01 November 2025 12:57:10 +0000 (0:00:00.809) 0:00:01.970 ***** 2025-11-01 12:57:39.159834 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 12:57:39.159846 | orchestrator | 2025-11-01 12:57:39.159856 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-11-01 12:57:39.159868 | orchestrator | Saturday 01 November 2025 12:57:11 +0000 (0:00:01.048) 0:00:03.018 ***** 2025-11-01 12:57:39.159881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.159966 | orchestrator | 2025-11-01 12:57:39.159976 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-11-01 12:57:39.159986 | orchestrator | Saturday 01 November 2025 12:57:13 +0000 (0:00:01.879) 0:00:04.898 ***** 2025-11-01 12:57:39.159996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160047 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160073 | orchestrator | 2025-11-01 12:57:39.160083 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-11-01 12:57:39.160092 | orchestrator | Saturday 01 November 2025 12:57:17 +0000 (0:00:04.489) 0:00:09.388 ***** 2025-11-01 12:57:39.160102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160169 | orchestrator | 2025-11-01 12:57:39.160183 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-11-01 12:57:39.160193 | orchestrator | Saturday 01 November 2025 12:57:20 +0000 (0:00:03.408) 0:00:12.796 ***** 2025-11-01 12:57:39.160219 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-11-01 12:57:39.160298 | orchestrator | 2025-11-01 12:57:39.160307 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2025-11-01 12:57:39.160317 | orchestrator | Saturday 01 November 2025 12:57:23 +0000 (0:00:02.797) 0:00:15.594 *****
2025-11-01 12:57:39.160327 | orchestrator |
2025-11-01 12:57:39.160336 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-11-01 12:57:39.160352 | orchestrator | Saturday 01 November 2025 12:57:23 +0000 (0:00:00.166) 0:00:15.760 *****
2025-11-01 12:57:39.160362 | orchestrator |
2025-11-01 12:57:39.160371 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-11-01 12:57:39.160381 | orchestrator | Saturday 01 November 2025 12:57:24 +0000 (0:00:00.150) 0:00:15.911 *****
2025-11-01 12:57:39.160390 | orchestrator |
2025-11-01 12:57:39.160403 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-11-01 12:57:39.160413 | orchestrator | Saturday 01 November 2025 12:57:24 +0000 (0:00:00.088) 0:00:16.000 *****
2025-11-01 12:57:39.160429 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:57:39.160439 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:57:39.160448 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:57:39.160458 | orchestrator |
2025-11-01 12:57:39.160467 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-11-01 12:57:39.160477 | orchestrator | Saturday 01 November 2025 12:57:29 +0000 (0:00:05.117) 0:00:21.117 *****
2025-11-01 12:57:39.160486 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:57:39.160496 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:57:39.160505 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:57:39.160515 | orchestrator |
2025-11-01 12:57:39.160524 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:57:39.160534 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:57:39.160544 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:57:39.160554 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 12:57:39.160563 | orchestrator |
2025-11-01 12:57:39.160573 | orchestrator |
2025-11-01 12:57:39.160582 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:57:39.160592 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:07.139) 0:00:28.257 *****
2025-11-01 12:57:39.160601 | orchestrator | ===============================================================================
2025-11-01 12:57:39.160611 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.14s
2025-11-01 12:57:39.160620 | orchestrator | redis : Restart redis container ----------------------------------------- 5.12s
2025-11-01 12:57:39.160630 | orchestrator | redis : Copying over default config.json files -------------------------- 4.49s
2025-11-01 12:57:39.160639 | orchestrator | redis : Copying over redis config files --------------------------------- 3.41s
2025-11-01 12:57:39.160648 | orchestrator | redis : Check redis containers ------------------------------------------ 2.80s
2025-11-01 12:57:39.160658 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.88s
2025-11-01 12:57:39.160667 | orchestrator | redis : include_tasks --------------------------------------------------- 1.05s
2025-11-01 12:57:39.160677 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2025-11-01 12:57:39.160686 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.52s
2025-11-01 12:57:39.160696 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.41s
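The kolla service definitions echoed in the task output above carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`) alongside the image and volume list. As a minimal sketch of how such a dict maps onto container-runtime health options: the function name `render_healthcheck_args` and the flag mapping are illustrative assumptions, not kolla-ansible code, though the flags themselves exist in the `docker run` CLI.

```python
# Sketch: map a kolla-style healthcheck dict (shape taken from the log
# output above) onto the equivalent `docker run` health flags.
# render_healthcheck_args is a hypothetical helper, not kolla-ansible code;
# kolla values are unitless strings, assumed here to mean seconds.

def render_healthcheck_args(healthcheck: dict) -> list[str]:
    """Render docker CLI health flags from a kolla healthcheck dict."""
    test = healthcheck["test"]
    # kolla stores the probe as ['CMD-SHELL', '<shell command>']
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]

# The redis healthcheck exactly as it appears in the log:
redis_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}

print(render_healthcheck_args(redis_healthcheck))
```

With the values above this yields `--health-cmd=healthcheck_listen redis-server 6379` plus interval/retry/start-period/timeout flags, which is why the log's `healthcheck_listen` probes show up in the containers' health status.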
2025-11-01 12:57:42.200098 | orchestrator | 2025-11-01 12:57:42 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:57:42.202572 | orchestrator | 2025-11-01 12:57:42 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:57:42.205850 | orchestrator | 2025-11-01 12:57:42 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:57:42.206333 | orchestrator | 2025-11-01 12:57:42 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state STARTED
2025-11-01 12:57:42.208974 | orchestrator | 2025-11-01 12:57:42 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:57:42.209063 | orchestrator | 2025-11-01 12:57:42 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:58:31.416115 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:58:31.417171 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:58:31.419297 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:58:31.420257 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:58:31.421465 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task 0d8c8d9e-39f0-4737-8bf3-1ac4fe02144d is in state SUCCESS
2025-11-01 12:58:31.423032 | orchestrator |
2025-11-01 12:58:31.423067 | orchestrator |
2025-11-01 12:58:31.423079 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 12:58:31.423091 | orchestrator |
2025-11-01 12:58:31.423103 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 12:58:31.423114 | orchestrator | Saturday 01 November 2025 12:57:08 +0000 (0:00:00.433) 0:00:00.433 *****
2025-11-01 12:58:31.423125 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:58:31.423138 | orchestrator
| ok: [testbed-node-4] 2025-11-01 12:58:31.423149 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:58:31.423159 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:31.423170 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:31.423181 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:31.423192 | orchestrator | 2025-11-01 12:58:31.423228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 12:58:31.423239 | orchestrator | Saturday 01 November 2025 12:57:09 +0000 (0:00:01.415) 0:00:01.849 ***** 2025-11-01 12:58:31.423251 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423275 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423287 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423297 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423308 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423319 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-11-01 12:58:31.423330 | orchestrator | 2025-11-01 12:58:31.423341 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-11-01 12:58:31.423352 | orchestrator | 2025-11-01 12:58:31.423363 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-11-01 12:58:31.423374 | orchestrator | Saturday 01 November 2025 12:57:10 +0000 (0:00:01.191) 0:00:03.041 ***** 2025-11-01 12:58:31.423386 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 12:58:31.423399 | orchestrator | 
2025-11-01 12:58:31.423410 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-01 12:58:31.423421 | orchestrator | Saturday 01 November 2025 12:57:12 +0000 (0:00:02.248) 0:00:05.289 ***** 2025-11-01 12:58:31.423432 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-01 12:58:31.423443 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-01 12:58:31.423454 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-01 12:58:31.423465 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-01 12:58:31.423553 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-01 12:58:31.423590 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-01 12:58:31.423602 | orchestrator | 2025-11-01 12:58:31.423614 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-01 12:58:31.423625 | orchestrator | Saturday 01 November 2025 12:57:15 +0000 (0:00:02.768) 0:00:08.058 ***** 2025-11-01 12:58:31.423636 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-11-01 12:58:31.423647 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-11-01 12:58:31.423658 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-11-01 12:58:31.423668 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-11-01 12:58:31.423679 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-11-01 12:58:31.423690 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-11-01 12:58:31.423700 | orchestrator | 2025-11-01 12:58:31.423711 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 12:58:31.423722 | orchestrator | Saturday 01 November 2025 12:57:18 +0000 (0:00:02.568) 0:00:10.626 ***** 2025-11-01 12:58:31.423733 | orchestrator | skipping: [testbed-node-3] => 
(item=openvswitch)  2025-11-01 12:58:31.423743 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:31.423755 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-11-01 12:58:31.423766 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:31.423777 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-11-01 12:58:31.423787 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:31.423798 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-11-01 12:58:31.423809 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:31.423820 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-11-01 12:58:31.423830 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:31.423841 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-11-01 12:58:31.423852 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:31.423863 | orchestrator | 2025-11-01 12:58:31.423873 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-11-01 12:58:31.423884 | orchestrator | Saturday 01 November 2025 12:57:19 +0000 (0:00:01.527) 0:00:12.153 ***** 2025-11-01 12:58:31.423895 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:31.423906 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:31.423917 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:31.423927 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:31.423938 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:31.423949 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:31.423960 | orchestrator | 2025-11-01 12:58:31.423971 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-11-01 12:58:31.423982 | orchestrator | Saturday 01 November 2025 12:57:20 +0000 (0:00:01.012) 0:00:13.166 ***** 2025-11-01 12:58:31.424011 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424145 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424224 | orchestrator | 2025-11-01 12:58:31.424237 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-11-01 12:58:31.424250 | orchestrator | Saturday 01 November 2025 12:57:23 +0000 (0:00:02.259) 0:00:15.426 ***** 2025-11-01 12:58:31.424278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424394 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424408 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}})
2025-11-01 12:58:31.424489 | orchestrator |
2025-11-01 12:58:31.424502 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-11-01 12:58:31.424515 | orchestrator | Saturday 01 November 2025 12:57:28 +0000 (0:00:05.078) 0:00:20.505 *****
2025-11-01 12:58:31.424528 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:58:31.424540 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:58:31.424551 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:58:31.424566 | orchestrator | skipping: [testbed-node-0]
2025-11-01 12:58:31.424578 | orchestrator | skipping: [testbed-node-1]
2025-11-01 12:58:31.424588 | orchestrator | skipping: [testbed-node-2]
2025-11-01 12:58:31.424599 | orchestrator |
2025-11-01 12:58:31.424610 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-11-01 12:58:31.424621 | orchestrator | Saturday 01 November 2025 12:57:30 +0000 (0:00:02.711) 0:00:23.216 *****
2025-11-01 12:58:31.424632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-11-01 12:58:31.424644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True,
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424692 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-11-01 12:58:31.424809 | orchestrator | 2025-11-01 12:58:31.424820 | orchestrator | TASK [openvswitch : Flush Handlers] 
********************************************
2025-11-01 12:58:31.424831 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:05.405) 0:00:28.622 *****
2025-11-01 12:58:31.424842 | orchestrator |
2025-11-01 12:58:31.424853 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-01 12:58:31.424864 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:00.315) 0:00:28.937 *****
2025-11-01 12:58:31.424875 | orchestrator |
2025-11-01 12:58:31.424885 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-01 12:58:31.424896 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:00.296) 0:00:29.233 *****
2025-11-01 12:58:31.424907 | orchestrator |
2025-11-01 12:58:31.424918 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-01 12:58:31.424928 | orchestrator | Saturday 01 November 2025 12:57:37 +0000 (0:00:00.300) 0:00:29.533 *****
2025-11-01 12:58:31.424939 | orchestrator |
2025-11-01 12:58:31.424950 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-01 12:58:31.424960 | orchestrator | Saturday 01 November 2025 12:57:37 +0000 (0:00:00.617) 0:00:30.151 *****
2025-11-01 12:58:31.424971 | orchestrator |
2025-11-01 12:58:31.424982 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-11-01 12:58:31.424992 | orchestrator | Saturday 01 November 2025 12:57:38 +0000 (0:00:00.541) 0:00:30.692 *****
2025-11-01 12:58:31.425003 | orchestrator |
2025-11-01 12:58:31.425014 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-11-01 12:58:31.425025 | orchestrator | Saturday 01 November 2025 12:57:38 +0000 (0:00:00.508) 0:00:31.200 *****
2025-11-01 12:58:31.425035 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:58:31.425046 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:58:31.425057 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:58:31.425068 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:58:31.425079 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:58:31.425089 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:58:31.425100 | orchestrator |
2025-11-01 12:58:31.425111 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-11-01 12:58:31.425121 | orchestrator | Saturday 01 November 2025 12:57:44 +0000 (0:00:05.327) 0:00:36.528 *****
2025-11-01 12:58:31.425141 | orchestrator | ok: [testbed-node-3]
2025-11-01 12:58:31.425153 | orchestrator | ok: [testbed-node-4]
2025-11-01 12:58:31.425163 | orchestrator | ok: [testbed-node-5]
2025-11-01 12:58:31.425174 | orchestrator | ok: [testbed-node-0]
2025-11-01 12:58:31.425185 | orchestrator | ok: [testbed-node-2]
2025-11-01 12:58:31.425211 | orchestrator | ok: [testbed-node-1]
2025-11-01 12:58:31.425222 | orchestrator |
2025-11-01 12:58:31.425233 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-11-01 12:58:31.425244 | orchestrator | Saturday 01 November 2025 12:57:46 +0000 (0:00:02.707) 0:00:39.236 *****
2025-11-01 12:58:31.425255 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:58:31.425265 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:58:31.425276 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:58:31.425287 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:58:31.425298 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:58:31.425308 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:58:31.425319 | orchestrator |
2025-11-01 12:58:31.425329 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-11-01 12:58:31.425340 | orchestrator | Saturday 01 November 2025 12:57:58 +0000 (0:00:12.007) 0:00:51.244 *****
2025-11-01 12:58:31.425351 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-11-01 12:58:31.425362 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-11-01 12:58:31.425373 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-11-01 12:58:31.425384 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-11-01 12:58:31.425395 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-11-01 12:58:31.425411 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-11-01 12:58:31.425423 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-11-01 12:58:31.425434 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-11-01 12:58:31.425444 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-11-01 12:58:31.425455 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-11-01 12:58:31.425465 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-11-01 12:58:31.425481 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-11-01 12:58:31.425492 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425503 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425513 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425524 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425535 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425545 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-11-01 12:58:31.425556 | orchestrator |
2025-11-01 12:58:31.425567 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-11-01 12:58:31.425585 | orchestrator | Saturday 01 November 2025 12:58:10 +0000 (0:00:11.258) 0:01:02.503 *****
2025-11-01 12:58:31.425596 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-11-01 12:58:31.425606 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:58:31.425617 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-11-01 12:58:31.425628 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:58:31.425639 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-11-01 12:58:31.425650 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:58:31.425660 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-11-01 12:58:31.425671 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-11-01 12:58:31.425682 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-11-01 12:58:31.425693 | orchestrator |
2025-11-01 12:58:31.425704 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-11-01 12:58:31.425714 | orchestrator | Saturday 01 November 2025 12:58:13 +0000 (0:00:03.552) 0:01:06.056 *****
2025-11-01 12:58:31.425725 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425736 | orchestrator | skipping: [testbed-node-3]
2025-11-01 12:58:31.425747 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425757 | orchestrator | skipping: [testbed-node-4]
2025-11-01 12:58:31.425768 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425779 | orchestrator | skipping: [testbed-node-5]
2025-11-01 12:58:31.425790 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425801 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425811 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-11-01 12:58:31.425822 | orchestrator |
2025-11-01 12:58:31.425833 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-11-01 12:58:31.425844 | orchestrator | Saturday 01 November 2025 12:58:18 +0000 (0:00:04.285) 0:01:10.341 *****
2025-11-01 12:58:31.425854 | orchestrator | changed: [testbed-node-3]
2025-11-01 12:58:31.425865 | orchestrator | changed: [testbed-node-4]
2025-11-01 12:58:31.425876 | orchestrator | changed: [testbed-node-1]
2025-11-01 12:58:31.425886 | orchestrator | changed: [testbed-node-5]
2025-11-01 12:58:31.425897 | orchestrator | changed: [testbed-node-2]
2025-11-01 12:58:31.425908 | orchestrator | changed: [testbed-node-0]
2025-11-01 12:58:31.425919 | orchestrator |
2025-11-01 12:58:31.425929 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 12:58:31.425940 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 12:58:31.425952 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 12:58:31.425963 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 12:58:31.425974 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 12:58:31.425984 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 12:58:31.426000 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 12:58:31.426012 | orchestrator |
2025-11-01 12:58:31.426075 | orchestrator |
2025-11-01 12:58:31.426086 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 12:58:31.426097 | orchestrator | Saturday 01 November 2025 12:58:27 +0000 (0:00:09.825) 0:01:20.167 *****
2025-11-01 12:58:31.426115 | orchestrator | ===============================================================================
2025-11-01 12:58:31.426126 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.83s
2025-11-01 12:58:31.426136 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 11.26s
2025-11-01 12:58:31.426147 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 5.41s
2025-11-01 12:58:31.426158 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.33s
2025-11-01 12:58:31.426174 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.08s
2025-11-01 12:58:31.426185 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.29s
2025-11-01 12:58:31.426223 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.55s
2025-11-01 12:58:31.426235 | orchestrator | module-load : Load modules ---------------------------------------------- 2.77s
2025-11-01 12:58:31.426246 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.71s
2025-11-01 12:58:31.426256 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.71s
2025-11-01 12:58:31.426267 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.58s
2025-11-01 12:58:31.426278 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.57s
2025-11-01 12:58:31.426288 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.26s
2025-11-01 12:58:31.426299 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.25s
2025-11-01 12:58:31.426310 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.53s
2025-11-01 12:58:31.426321 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.42s
2025-11-01 12:58:31.426331 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.19s
2025-11-01 12:58:31.426342 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.01s
2025-11-01 12:58:31.426353 | orchestrator | 2025-11-01 12:58:31 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED
2025-11-01 12:58:31.426364 | orchestrator | 2025-11-01 12:58:31 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:58:34.459172 | orchestrator | 2025-11-01 12:58:34 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:58:34.460379 | orchestrator | 2025-11-01 12:58:34 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:58:34.463843 | orchestrator | 2025-11-01 12:58:34 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:58:34.467679 | orchestrator |
2025-11-01 12:58:34 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:34.471291 | orchestrator | 2025-11-01 12:58:34 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:34.471444 | orchestrator | 2025-11-01 12:58:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:37.506688 | orchestrator | 2025-11-01 12:58:37 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:37.507886 | orchestrator | 2025-11-01 12:58:37 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:37.510091 | orchestrator | 2025-11-01 12:58:37 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:37.510886 | orchestrator | 2025-11-01 12:58:37 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:37.511342 | orchestrator | 2025-11-01 12:58:37 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:37.511367 | orchestrator | 2025-11-01 12:58:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:40.557580 | orchestrator | 2025-11-01 12:58:40 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:40.559128 | orchestrator | 2025-11-01 12:58:40 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:40.560066 | orchestrator | 2025-11-01 12:58:40 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:40.561287 | orchestrator | 2025-11-01 12:58:40 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:40.562381 | orchestrator | 2025-11-01 12:58:40 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:40.562405 | orchestrator | 2025-11-01 12:58:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:43.597638 | orchestrator | 2025-11-01 12:58:43 | INFO  | 
Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:43.599101 | orchestrator | 2025-11-01 12:58:43 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:43.600182 | orchestrator | 2025-11-01 12:58:43 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:43.601449 | orchestrator | 2025-11-01 12:58:43 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:43.603481 | orchestrator | 2025-11-01 12:58:43 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:43.603526 | orchestrator | 2025-11-01 12:58:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:46.665571 | orchestrator | 2025-11-01 12:58:46 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:46.665838 | orchestrator | 2025-11-01 12:58:46 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:46.667352 | orchestrator | 2025-11-01 12:58:46 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:46.671943 | orchestrator | 2025-11-01 12:58:46 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:46.673858 | orchestrator | 2025-11-01 12:58:46 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:46.673879 | orchestrator | 2025-11-01 12:58:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:49.709744 | orchestrator | 2025-11-01 12:58:49 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:49.710178 | orchestrator | 2025-11-01 12:58:49 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:49.711468 | orchestrator | 2025-11-01 12:58:49 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:49.712939 | orchestrator | 2025-11-01 12:58:49 | INFO  | Task 
5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:49.712961 | orchestrator | 2025-11-01 12:58:49 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:49.712973 | orchestrator | 2025-11-01 12:58:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:52.760904 | orchestrator | 2025-11-01 12:58:52 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:52.760978 | orchestrator | 2025-11-01 12:58:52 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:52.761263 | orchestrator | 2025-11-01 12:58:52 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:52.762111 | orchestrator | 2025-11-01 12:58:52 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:52.762966 | orchestrator | 2025-11-01 12:58:52 | INFO  | Task 0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state STARTED 2025-11-01 12:58:52.763015 | orchestrator | 2025-11-01 12:58:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:55.822604 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task b641f72d-3464-487c-bcad-403615981703 is in state STARTED 2025-11-01 12:58:55.827712 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:58:55.832242 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:58:55.836130 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:58:55.837394 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:58:55.840856 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task 2a0c5738-cd18-476f-b25b-b167d0b45a41 is in state STARTED 2025-11-01 12:58:55.843314 | orchestrator | 2025-11-01 12:58:55 | INFO  | Task 
0bd01fb3-2b05-4823-bf93-7af5a73eb54d is in state SUCCESS 2025-11-01 12:58:55.843336 | orchestrator | 2025-11-01 12:58:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:58:55.845008 | orchestrator | 2025-11-01 12:58:55.845044 | orchestrator | 2025-11-01 12:58:55.845057 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-11-01 12:58:55.845068 | orchestrator | 2025-11-01 12:58:55.845079 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-11-01 12:58:55.845091 | orchestrator | Saturday 01 November 2025 12:54:03 +0000 (0:00:00.247) 0:00:00.247 ***** 2025-11-01 12:58:55.845102 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:58:55.845114 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:58:55.845125 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:58:55.845136 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.845147 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.845157 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.845168 | orchestrator | 2025-11-01 12:58:55.845179 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-11-01 12:58:55.845190 | orchestrator | Saturday 01 November 2025 12:54:04 +0000 (0:00:00.983) 0:00:01.230 ***** 2025-11-01 12:58:55.845224 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.845237 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.845249 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.845260 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.845271 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.845282 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.845293 | orchestrator | 2025-11-01 12:58:55.845304 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-11-01 12:58:55.845315 | orchestrator | Saturday 01 
November 2025 12:54:05 +0000 (0:00:01.118) 0:00:02.348 ***** 2025-11-01 12:58:55.845327 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.845355 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.845366 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.845377 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.845388 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.845399 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.845409 | orchestrator | 2025-11-01 12:58:55.845420 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-11-01 12:58:55.845431 | orchestrator | Saturday 01 November 2025 12:54:07 +0000 (0:00:01.186) 0:00:03.534 ***** 2025-11-01 12:58:55.845442 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:58:55.845453 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:58:55.845464 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.845475 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.845486 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:58:55.845517 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.845528 | orchestrator | 2025-11-01 12:58:55.845539 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-11-01 12:58:55.845550 | orchestrator | Saturday 01 November 2025 12:54:09 +0000 (0:00:02.297) 0:00:05.832 ***** 2025-11-01 12:58:55.845561 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:58:55.845571 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:58:55.845582 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:58:55.845593 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.845605 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.845617 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.845629 | orchestrator | 2025-11-01 12:58:55.845641 | orchestrator | TASK 
[k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-11-01 12:58:55.845653 | orchestrator | Saturday 01 November 2025 12:54:11 +0000 (0:00:01.972) 0:00:07.804 ***** 2025-11-01 12:58:55.845666 | orchestrator | changed: [testbed-node-3] 2025-11-01 12:58:55.845678 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:58:55.845690 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:58:55.845702 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.845715 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.845727 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.845739 | orchestrator | 2025-11-01 12:58:55.845751 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-11-01 12:58:55.845764 | orchestrator | Saturday 01 November 2025 12:54:12 +0000 (0:00:01.506) 0:00:09.311 ***** 2025-11-01 12:58:55.845776 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.845788 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.845800 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.845813 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.845825 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.845837 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.845849 | orchestrator | 2025-11-01 12:58:55.845861 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-11-01 12:58:55.845874 | orchestrator | Saturday 01 November 2025 12:54:13 +0000 (0:00:00.831) 0:00:10.142 ***** 2025-11-01 12:58:55.845886 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.845898 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.845911 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.845923 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.845934 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.845946 | 
orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.845959 | orchestrator | 2025-11-01 12:58:55.845970 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-11-01 12:58:55.845980 | orchestrator | Saturday 01 November 2025 12:54:15 +0000 (0:00:01.431) 0:00:11.574 ***** 2025-11-01 12:58:55.845991 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 12:58:55.846002 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846013 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846075 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 12:58:55.846086 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846097 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846108 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 12:58:55.846118 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846129 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.846140 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 12:58:55.846163 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846174 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.846185 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 12:58:55.846220 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846232 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.846243 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 
12:58:55.846254 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 12:58:55.846264 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.846275 | orchestrator | 2025-11-01 12:58:55.846286 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-11-01 12:58:55.846297 | orchestrator | Saturday 01 November 2025 12:54:16 +0000 (0:00:01.131) 0:00:12.706 ***** 2025-11-01 12:58:55.846308 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846319 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846329 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.846340 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.846351 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.846362 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.846373 | orchestrator | 2025-11-01 12:58:55.846383 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-11-01 12:58:55.846396 | orchestrator | Saturday 01 November 2025 12:54:18 +0000 (0:00:02.172) 0:00:14.879 ***** 2025-11-01 12:58:55.846413 | orchestrator | ok: [testbed-node-3] 2025-11-01 12:58:55.846424 | orchestrator | ok: [testbed-node-4] 2025-11-01 12:58:55.846435 | orchestrator | ok: [testbed-node-5] 2025-11-01 12:58:55.846445 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.846456 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.846467 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.846478 | orchestrator | 2025-11-01 12:58:55.846489 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-11-01 12:58:55.846499 | orchestrator | Saturday 01 November 2025 12:54:19 +0000 (0:00:01.400) 0:00:16.279 ***** 2025-11-01 12:58:55.846510 | orchestrator | changed: [testbed-node-4] 2025-11-01 12:58:55.846521 | orchestrator | 
changed: [testbed-node-3] 2025-11-01 12:58:55.846532 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.846543 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.846554 | orchestrator | changed: [testbed-node-5] 2025-11-01 12:58:55.846564 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.846575 | orchestrator | 2025-11-01 12:58:55.846586 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-11-01 12:58:55.846596 | orchestrator | Saturday 01 November 2025 12:54:25 +0000 (0:00:06.089) 0:00:22.369 ***** 2025-11-01 12:58:55.846607 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846618 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846628 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.846639 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.846650 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.846660 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.846671 | orchestrator | 2025-11-01 12:58:55.846682 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-11-01 12:58:55.846693 | orchestrator | Saturday 01 November 2025 12:54:27 +0000 (0:00:01.724) 0:00:24.094 ***** 2025-11-01 12:58:55.846703 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846714 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846725 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.846735 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.846746 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.846757 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.846767 | orchestrator | 2025-11-01 12:58:55.846778 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-11-01 12:58:55.846791 | orchestrator | 
Saturday 01 November 2025 12:54:29 +0000 (0:00:02.206) 0:00:26.300 ***** 2025-11-01 12:58:55.846809 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846820 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846830 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.846841 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.846852 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.846862 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.846873 | orchestrator | 2025-11-01 12:58:55.846884 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-11-01 12:58:55.846895 | orchestrator | Saturday 01 November 2025 12:54:30 +0000 (0:00:00.795) 0:00:27.096 ***** 2025-11-01 12:58:55.846906 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-11-01 12:58:55.846917 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-11-01 12:58:55.846928 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-11-01 12:58:55.846938 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-11-01 12:58:55.846949 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.846960 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-11-01 12:58:55.846971 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-11-01 12:58:55.846981 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.846992 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-11-01 12:58:55.847003 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-11-01 12:58:55.847014 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.847025 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-11-01 12:58:55.847035 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-11-01 12:58:55.847046 | orchestrator | skipping: [testbed-node-0] 2025-11-01 
12:58:55.847057 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.847067 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-11-01 12:58:55.847078 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-11-01 12:58:55.847089 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.847100 | orchestrator | 2025-11-01 12:58:55.847111 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-11-01 12:58:55.847128 | orchestrator | Saturday 01 November 2025 12:54:33 +0000 (0:00:02.716) 0:00:29.813 ***** 2025-11-01 12:58:55.847139 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.847150 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.847161 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.847171 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.847182 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.847193 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.847220 | orchestrator | 2025-11-01 12:58:55.847231 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2025-11-01 12:58:55.847242 | orchestrator | Saturday 01 November 2025 12:54:35 +0000 (0:00:02.599) 0:00:32.412 ***** 2025-11-01 12:58:55.847253 | orchestrator | skipping: [testbed-node-3] 2025-11-01 12:58:55.847264 | orchestrator | skipping: [testbed-node-4] 2025-11-01 12:58:55.847274 | orchestrator | skipping: [testbed-node-5] 2025-11-01 12:58:55.847285 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.847296 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.847307 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.847318 | orchestrator | 2025-11-01 12:58:55.847328 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-11-01 12:58:55.847339 | orchestrator | 2025-11-01 
12:58:55.847350 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-11-01 12:58:55.847361 | orchestrator | Saturday 01 November 2025 12:54:39 +0000 (0:00:04.023) 0:00:36.436 ***** 2025-11-01 12:58:55.847371 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.847382 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.847393 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.847404 | orchestrator | 2025-11-01 12:58:55.847426 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-11-01 12:58:55.847437 | orchestrator | Saturday 01 November 2025 12:54:44 +0000 (0:00:04.986) 0:00:41.423 ***** 2025-11-01 12:58:55.847448 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.847459 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.847470 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.847480 | orchestrator | 2025-11-01 12:58:55.847491 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-11-01 12:58:55.847502 | orchestrator | Saturday 01 November 2025 12:54:47 +0000 (0:00:02.219) 0:00:43.643 ***** 2025-11-01 12:58:55.847512 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.847523 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.847534 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.847545 | orchestrator | 2025-11-01 12:58:55.847555 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-11-01 12:58:55.847566 | orchestrator | Saturday 01 November 2025 12:54:48 +0000 (0:00:01.419) 0:00:45.063 ***** 2025-11-01 12:58:55.847577 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.847588 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.847599 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.847609 | orchestrator | 2025-11-01 12:58:55.847620 | orchestrator | TASK [k3s_server : Deploy K3s 
http_proxy conf] ********************************* 2025-11-01 12:58:55.847631 | orchestrator | Saturday 01 November 2025 12:54:49 +0000 (0:00:01.238) 0:00:46.301 ***** 2025-11-01 12:58:55.847642 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.847653 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.847664 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.847674 | orchestrator | 2025-11-01 12:58:55.847685 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-11-01 12:58:55.847696 | orchestrator | Saturday 01 November 2025 12:54:50 +0000 (0:00:00.577) 0:00:46.879 ***** 2025-11-01 12:58:55.847707 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.847718 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.847728 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.847739 | orchestrator | 2025-11-01 12:58:55.847750 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-11-01 12:58:55.847761 | orchestrator | Saturday 01 November 2025 12:54:52 +0000 (0:00:02.228) 0:00:49.107 ***** 2025-11-01 12:58:55.847772 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.847783 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.847793 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.847804 | orchestrator | 2025-11-01 12:58:55.847815 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-11-01 12:58:55.847826 | orchestrator | Saturday 01 November 2025 12:54:55 +0000 (0:00:02.462) 0:00:51.569 ***** 2025-11-01 12:58:55.847836 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 12:58:55.847847 | orchestrator | 2025-11-01 12:58:55.847858 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-11-01 12:58:55.847869 
| orchestrator | Saturday 01 November 2025 12:54:56 +0000 (0:00:00.976) 0:00:52.545 ***** 2025-11-01 12:58:55.847880 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.847890 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.847901 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.847912 | orchestrator | 2025-11-01 12:58:55.847922 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-11-01 12:58:55.847933 | orchestrator | Saturday 01 November 2025 12:55:00 +0000 (0:00:04.192) 0:00:56.738 ***** 2025-11-01 12:58:55.847944 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.847955 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.847965 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.847976 | orchestrator | 2025-11-01 12:58:55.847987 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-11-01 12:58:55.847998 | orchestrator | Saturday 01 November 2025 12:55:01 +0000 (0:00:01.228) 0:00:57.966 ***** 2025-11-01 12:58:55.848016 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.848027 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.848038 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.848048 | orchestrator | 2025-11-01 12:58:55.848059 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-11-01 12:58:55.848069 | orchestrator | Saturday 01 November 2025 12:55:03 +0000 (0:00:01.914) 0:00:59.881 ***** 2025-11-01 12:58:55.848080 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.848091 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.848102 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.848112 | orchestrator | 2025-11-01 12:58:55.848123 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-11-01 12:58:55.848140 | orchestrator | 
Saturday 01 November 2025 12:55:05 +0000 (0:00:02.179) 0:01:02.060 ***** 2025-11-01 12:58:55.848151 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.848162 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.848173 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.848184 | orchestrator | 2025-11-01 12:58:55.848195 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-11-01 12:58:55.848239 | orchestrator | Saturday 01 November 2025 12:55:07 +0000 (0:00:01.916) 0:01:03.976 ***** 2025-11-01 12:58:55.848250 | orchestrator | skipping: [testbed-node-0] 2025-11-01 12:58:55.848261 | orchestrator | skipping: [testbed-node-1] 2025-11-01 12:58:55.848272 | orchestrator | skipping: [testbed-node-2] 2025-11-01 12:58:55.848283 | orchestrator | 2025-11-01 12:58:55.848294 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-11-01 12:58:55.848304 | orchestrator | Saturday 01 November 2025 12:55:08 +0000 (0:00:00.661) 0:01:04.638 ***** 2025-11-01 12:58:55.848315 | orchestrator | changed: [testbed-node-0] 2025-11-01 12:58:55.848326 | orchestrator | changed: [testbed-node-1] 2025-11-01 12:58:55.848336 | orchestrator | changed: [testbed-node-2] 2025-11-01 12:58:55.848347 | orchestrator | 2025-11-01 12:58:55.848358 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2025-11-01 12:58:55.848368 | orchestrator | Saturday 01 November 2025 12:55:11 +0000 (0:00:02.970) 0:01:07.608 ***** 2025-11-01 12:58:55.848379 | orchestrator | ok: [testbed-node-0] 2025-11-01 12:58:55.848390 | orchestrator | ok: [testbed-node-1] 2025-11-01 12:58:55.848400 | orchestrator | ok: [testbed-node-2] 2025-11-01 12:58:55.848411 | orchestrator | 2025-11-01 12:58:55.848427 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2025-11-01 12:58:55.848438 | orchestrator | Saturday 01 November 
2025 12:55:13 +0000 (0:00:02.544) 0:01:10.152 *****
2025-11-01 12:58:55.848448 | orchestrator | ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Saturday 01 November 2025 12:55:15 +0000 (0:00:01.952) 0:01:12.105 *****
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Saturday 01 November 2025 12:55:59 +0000 (0:00:44.084) 0:01:56.190 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Saturday 01 November 2025 12:56:00 +0000 (0:00:00.418) 0:01:56.608 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Saturday 01 November 2025 12:56:01 +0000 (0:00:01.484) 0:01:58.093 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Saturday 01 November 2025 12:56:02 +0000 (0:00:01.281) 0:01:59.375 *****
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_server : Wait for node-token] ****************************************
Saturday 01 November 2025 12:56:41 +0000 (0:00:38.598) 0:02:37.974 *****
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [k3s_server : Register node-token file access mode] ***********************
Saturday 01 November 2025 12:56:42 +0000 (0:00:00.762) 0:02:38.737 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Saturday 01 November 2025 12:56:42 +0000 (0:00:00.674) 0:02:39.411 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Saturday 01 November 2025 12:56:43 +0000 (0:00:00.700) 0:02:40.111 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Saturday 01 November 2025 12:56:44 +0000 (0:00:00.962) 0:02:41.074 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Saturday 01 November 2025 12:56:44 +0000 (0:00:00.335) 0:02:41.409 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Saturday 01 November 2025 12:56:45 +0000 (0:00:00.670) 0:02:42.080 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Saturday 01 November 2025 12:56:46 +0000 (0:00:00.680) 0:02:42.760 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Saturday 01 November 2025 12:56:47 +0000 (0:00:01.241) 0:02:44.002 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Saturday 01 November 2025 12:56:48 +0000 (0:00:00.847) 0:02:44.849 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Saturday 01 November 2025 12:56:48 +0000 (0:00:00.422) 0:02:45.272 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Saturday 01 November 2025 12:56:49 +0000 (0:00:00.391) 0:02:45.663 *****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Saturday 01 November 2025 12:56:50 +0000 (0:00:00.994) 0:02:46.658 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Saturday 01 November 2025 12:56:50 +0000 (0:00:00.801) 0:02:47.459 *****
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Saturday 01 November 2025 12:56:54 +0000 (0:00:03.526) 0:02:50.985 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Saturday 01 November 2025 12:56:55 +0000 (0:00:00.618) 0:02:51.604 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Saturday 01 November 2025 12:56:55 +0000 (0:00:00.713) 0:02:52.318 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Saturday 01 November 2025 12:56:56 +0000 (0:00:00.342) 0:02:52.660 *****
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Saturday 01 November 2025 12:56:56 +0000 (0:00:00.794) 0:02:53.455 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Saturday 01 November 2025 12:56:57 +0000 (0:00:00.344) 0:02:53.800 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Saturday 01 November 2025 12:56:57 +0000 (0:00:00.333) 0:02:54.134 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Saturday 01 November 2025 12:56:57 +0000 (0:00:00.337) 0:02:54.471 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Saturday 01 November 2025 12:56:58 +0000 (0:00:00.927) 0:02:55.399 *****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Configure the k3s service] ***********************************
Saturday 01 November 2025 12:57:00 +0000 (0:00:01.223) 0:02:56.623 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Saturday 01 November 2025 12:57:01 +0000 (0:00:01.591) 0:02:58.215 *****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Saturday 01 November 2025 12:57:13 +0000 (0:00:11.785) 0:03:10.000 *****
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Saturday 01 November 2025 12:57:14 +0000 (0:00:00.729) 0:03:10.730 *****
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Saturday 01 November 2025 12:57:14 +0000 (0:00:00.497) 0:03:11.227 *****
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Saturday 01 November 2025 12:57:15 +0000 (0:00:00.597) 0:03:11.824 *****
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Saturday 01 November 2025 12:57:16 +0000 (0:00:00.808) 0:03:12.632 *****
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Saturday 01 November 2025 12:57:16 +0000 (0:00:00.791) 0:03:13.424 *****
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Saturday 01 November 2025 12:57:18 +0000 (0:00:01.618) 0:03:15.043 *****
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Saturday 01 November 2025 12:57:19 +0000 (0:00:00.868) 0:03:15.911 *****
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Saturday 01 November 2025 12:57:19 +0000 (0:00:00.418) 0:03:16.330 *****
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Saturday 01 November 2025 12:57:20 +0000 (0:00:00.759) 0:03:17.089 *****
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Saturday 01 November 2025 12:57:20 +0000 (0:00:00.186) 0:03:17.275 *****
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Saturday 01 November 2025 12:57:21 +0000 (0:00:00.293) 0:03:17.568 *****
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Saturday 01 November 2025 12:57:22 +0000 (0:00:01.272) 0:03:18.840 *****
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Saturday 01 November 2025 12:57:24 +0000 (0:00:02.279) 0:03:21.120 *****
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Saturday 01 November 2025 12:57:25 +0000 (0:00:00.953) 0:03:22.074 *****
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Saturday 01 November 2025 12:57:26 +0000 (0:00:00.657) 0:03:22.731 *****
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Saturday 01 November 2025 12:57:36 +0000 (0:00:10.209) 0:03:32.941 *****
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Saturday 01 November 2025 12:57:54 +0000 (0:00:18.006) 0:03:50.947 *****
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Saturday 01 November 2025 12:57:55 +0000 (0:00:00.667) 0:03:51.615 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Saturday 01 November 2025 12:57:55 +0000 (0:00:00.546) 0:03:52.162 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Saturday 01 November 2025 12:57:56 +0000 (0:00:00.445) 0:03:52.607 *****
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Saturday 01 November 2025 12:57:57 +0000 (0:00:00.982) 0:03:53.589 *****
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Saturday 01 November 2025 12:57:58 +0000 (0:00:01.072) 0:03:54.662 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Saturday 01 November 2025 12:57:58 +0000 (0:00:00.142) 0:03:54.805 *****
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Saturday 01 November 2025 12:57:59 +0000 (0:00:01.228) 0:03:56.033 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Saturday 01 November 2025 12:57:59 +0000 (0:00:00.169) 0:03:56.203 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Saturday 01 November 2025 12:57:59 +0000 (0:00:00.138) 0:03:56.341 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Saturday 01 November 2025 12:58:00 +0000 (0:00:00.141) 0:03:56.483 *****
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Saturday 01 November 2025 12:58:00 +0000 (0:00:00.128) 0:03:56.612 *****
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Saturday 01 November 2025 12:58:05 +0000 (0:00:05.789) 0:04:02.402 *****
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Saturday 01 November 2025 12:58:49 +0000 (0:00:43.589) 0:04:45.992 *****
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Saturday 01 November 2025 12:58:51 +0000 (0:00:02.032) 0:04:48.024 *****
fatal: [testbed-node-0 -> localhost]: FAILED! => {"changed": false, "checksum": "e067333911ec303b1abbababa17374a0629c5a29", "msg": "Destination directory /tmp/k3s does not exist"}

PLAY RECAP *********************************************************************
testbed-manager            : ok=18   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=43   changed=20   unreachable=0    failed=1    skipped=24   rescued=0    ignored=0
testbed-node-1             : ok=35   changed=16   unreachable=0    failed=0    skipped=22   rescued=0    ignored=0
testbed-node-2             : ok=35   changed=16   unreachable=0    failed=0    skipped=22   rescued=0    ignored=0
testbed-node-3             : ok=14   changed=8    unreachable=0    failed=0    skipped=15   rescued=0    ignored=0
testbed-node-4             : ok=14   changed=8    unreachable=0    failed=0    skipped=15   rescued=0    ignored=0
testbed-node-5             : ok=14   changed=8    unreachable=0    failed=0    skipped=15   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 01 November 2025 12:58:53 +0000 (0:00:01.915) 0:04:49.939 *****
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.08s
k3s_server_post : Wait for Cilium resources ---------------------------- 43.59s
k3s_server : Enable and check K3s service ------------------------------ 38.60s
kubectl : Install required packages ------------------------------------ 18.01s
k3s_agent : Manage k3s service ----------------------------------------- 11.79s
kubectl : Add repository Debian ---------------------------------------- 10.21s
k3s_download : Download k3s binary x64 ---------------------------------- 6.09s
k3s_server_post : Install Cilium ---------------------------------------- 5.79s
k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.99s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.19s
k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 4.02s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.53s
k3s_server : Init cluster inside the transient k3s-init service --------- 2.97s
k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.72s
k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.60s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.54s
k3s_server : Create custom resolv.conf for k3s -------------------------- 2.46s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.30s
kubectl : Install apt-transport-https package --------------------------- 2.28s
k3s_server : Create /etc/rancher/k3s directory -------------------------- 2.23s
2025-11-01 12:58:58 | INFO  | Task b641f72d-3464-487c-bcad-403615981703 is in state STARTED
2025-11-01 12:58:58 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:58:58 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:58:58 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:58:58 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:58:58 | INFO  | Task 2a0c5738-cd18-476f-b25b-b167d0b45a41 is in state STARTED
2025-11-01 12:58:58 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:59:01 | INFO  | Task b641f72d-3464-487c-bcad-403615981703 is in state STARTED
2025-11-01 12:59:01 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:59:01 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:59:01 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:59:01 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:59:02 | INFO  | Task 2a0c5738-cd18-476f-b25b-b167d0b45a41 is in state STARTED
2025-11-01 12:59:02 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:59:05 | INFO  | Task b641f72d-3464-487c-bcad-403615981703 is in state STARTED
2025-11-01 12:59:05 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:59:05 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:59:05 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:59:05 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:59:05 | INFO  | Task 2a0c5738-cd18-476f-b25b-b167d0b45a41 is in state SUCCESS
2025-11-01 12:59:05 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:59:08 | INFO  | Task b641f72d-3464-487c-bcad-403615981703 is in state STARTED
2025-11-01 12:59:08 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 12:59:08 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED
2025-11-01 12:59:08 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED
2025-11-01 12:59:08 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED
2025-11-01 12:59:08 | INFO  | Wait 1 second(s) until the next check
2025-11-01 12:59:11 | INFO  | Task
b641f72d-3464-487c-bcad-403615981703 is in state SUCCESS 2025-11-01 12:59:11.225792 | orchestrator | 2025-11-01 12:59:11 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:11.227814 | orchestrator | 2025-11-01 12:59:11 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:11.229291 | orchestrator | 2025-11-01 12:59:11 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:11.230329 | orchestrator | 2025-11-01 12:59:11 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:11.230352 | orchestrator | 2025-11-01 12:59:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:14.271585 | orchestrator | 2025-11-01 12:59:14 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:14.272981 | orchestrator | 2025-11-01 12:59:14 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:14.275343 | orchestrator | 2025-11-01 12:59:14 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:14.277298 | orchestrator | 2025-11-01 12:59:14 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:14.277325 | orchestrator | 2025-11-01 12:59:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:17.310667 | orchestrator | 2025-11-01 12:59:17 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:17.312034 | orchestrator | 2025-11-01 12:59:17 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:17.312928 | orchestrator | 2025-11-01 12:59:17 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:17.314792 | orchestrator | 2025-11-01 12:59:17 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:17.314820 | orchestrator | 2025-11-01 12:59:17 | INFO  | Wait 1 
second(s) until the next check 2025-11-01 12:59:20.406578 | orchestrator | 2025-11-01 12:59:20 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:20.409450 | orchestrator | 2025-11-01 12:59:20 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:20.417475 | orchestrator | 2025-11-01 12:59:20 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:20.420998 | orchestrator | 2025-11-01 12:59:20 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:20.421854 | orchestrator | 2025-11-01 12:59:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:23.461771 | orchestrator | 2025-11-01 12:59:23 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:23.463674 | orchestrator | 2025-11-01 12:59:23 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:23.470075 | orchestrator | 2025-11-01 12:59:23 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:23.473778 | orchestrator | 2025-11-01 12:59:23 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:23.473810 | orchestrator | 2025-11-01 12:59:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:26.517735 | orchestrator | 2025-11-01 12:59:26 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:26.519038 | orchestrator | 2025-11-01 12:59:26 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:26.521476 | orchestrator | 2025-11-01 12:59:26 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:26.523941 | orchestrator | 2025-11-01 12:59:26 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:26.525132 | orchestrator | 2025-11-01 12:59:26 | INFO  | Wait 1 second(s) until the next check 
2025-11-01 12:59:29.568474 | orchestrator | 2025-11-01 12:59:29 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:29.571774 | orchestrator | 2025-11-01 12:59:29 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:29.572628 | orchestrator | 2025-11-01 12:59:29 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:29.574370 | orchestrator | 2025-11-01 12:59:29 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:29.574580 | orchestrator | 2025-11-01 12:59:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:32.616114 | orchestrator | 2025-11-01 12:59:32 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:32.617818 | orchestrator | 2025-11-01 12:59:32 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:32.619932 | orchestrator | 2025-11-01 12:59:32 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:32.622695 | orchestrator | 2025-11-01 12:59:32 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:32.622724 | orchestrator | 2025-11-01 12:59:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:35.656485 | orchestrator | 2025-11-01 12:59:35 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:35.657464 | orchestrator | 2025-11-01 12:59:35 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:35.658760 | orchestrator | 2025-11-01 12:59:35 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:35.659999 | orchestrator | 2025-11-01 12:59:35 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:35.660159 | orchestrator | 2025-11-01 12:59:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:38.714501 | 
orchestrator | 2025-11-01 12:59:38 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:38.714581 | orchestrator | 2025-11-01 12:59:38 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:38.714596 | orchestrator | 2025-11-01 12:59:38 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:38.714609 | orchestrator | 2025-11-01 12:59:38 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:38.714620 | orchestrator | 2025-11-01 12:59:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:41.748367 | orchestrator | 2025-11-01 12:59:41 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:41.750591 | orchestrator | 2025-11-01 12:59:41 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:41.752069 | orchestrator | 2025-11-01 12:59:41 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:41.753423 | orchestrator | 2025-11-01 12:59:41 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:41.753446 | orchestrator | 2025-11-01 12:59:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:44.798150 | orchestrator | 2025-11-01 12:59:44 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:44.799755 | orchestrator | 2025-11-01 12:59:44 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:44.801853 | orchestrator | 2025-11-01 12:59:44 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:44.803532 | orchestrator | 2025-11-01 12:59:44 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:44.803583 | orchestrator | 2025-11-01 12:59:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:47.835505 | orchestrator | 2025-11-01 
12:59:47 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:47.836036 | orchestrator | 2025-11-01 12:59:47 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:47.838686 | orchestrator | 2025-11-01 12:59:47 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:47.839569 | orchestrator | 2025-11-01 12:59:47 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:47.839618 | orchestrator | 2025-11-01 12:59:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:50.879786 | orchestrator | 2025-11-01 12:59:50 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:50.879968 | orchestrator | 2025-11-01 12:59:50 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:50.881586 | orchestrator | 2025-11-01 12:59:50 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:50.883147 | orchestrator | 2025-11-01 12:59:50 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:50.883319 | orchestrator | 2025-11-01 12:59:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:53.913359 | orchestrator | 2025-11-01 12:59:53 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:53.913960 | orchestrator | 2025-11-01 12:59:53 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:53.914978 | orchestrator | 2025-11-01 12:59:53 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:53.915838 | orchestrator | 2025-11-01 12:59:53 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:53.915859 | orchestrator | 2025-11-01 12:59:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:56.950351 | orchestrator | 2025-11-01 12:59:56 | INFO  | Task 
b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:56.950866 | orchestrator | 2025-11-01 12:59:56 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:56.951731 | orchestrator | 2025-11-01 12:59:56 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:56.952896 | orchestrator | 2025-11-01 12:59:56 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:56.952914 | orchestrator | 2025-11-01 12:59:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 12:59:59.992610 | orchestrator | 2025-11-01 12:59:59 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 12:59:59.996874 | orchestrator | 2025-11-01 12:59:59 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 12:59:59.998347 | orchestrator | 2025-11-01 12:59:59 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 12:59:59.999799 | orchestrator | 2025-11-01 12:59:59 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 12:59:59.999947 | orchestrator | 2025-11-01 12:59:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:03.048690 | orchestrator | 2025-11-01 13:00:03 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:03.059541 | orchestrator | 2025-11-01 13:00:03 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:03.068860 | orchestrator | 2025-11-01 13:00:03 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:03.070103 | orchestrator | 2025-11-01 13:00:03 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:03.070133 | orchestrator | 2025-11-01 13:00:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:06.166784 | orchestrator | 2025-11-01 13:00:06 | INFO  | Task 
b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:06.168964 | orchestrator | 2025-11-01 13:00:06 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:06.171364 | orchestrator | 2025-11-01 13:00:06 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:06.173021 | orchestrator | 2025-11-01 13:00:06 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:06.173153 | orchestrator | 2025-11-01 13:00:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:09.207507 | orchestrator | 2025-11-01 13:00:09 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:09.207581 | orchestrator | 2025-11-01 13:00:09 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:09.210159 | orchestrator | 2025-11-01 13:00:09 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:09.212029 | orchestrator | 2025-11-01 13:00:09 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:09.212317 | orchestrator | 2025-11-01 13:00:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:12.281063 | orchestrator | 2025-11-01 13:00:12 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:12.284465 | orchestrator | 2025-11-01 13:00:12 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:12.286098 | orchestrator | 2025-11-01 13:00:12 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:12.288985 | orchestrator | 2025-11-01 13:00:12 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:12.289263 | orchestrator | 2025-11-01 13:00:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:15.341124 | orchestrator | 2025-11-01 13:00:15 | INFO  | Task 
b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:15.342641 | orchestrator | 2025-11-01 13:00:15 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:15.343743 | orchestrator | 2025-11-01 13:00:15 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:15.347581 | orchestrator | 2025-11-01 13:00:15 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:15.347619 | orchestrator | 2025-11-01 13:00:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:18.386059 | orchestrator | 2025-11-01 13:00:18 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:18.387171 | orchestrator | 2025-11-01 13:00:18 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:18.388332 | orchestrator | 2025-11-01 13:00:18 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:18.391671 | orchestrator | 2025-11-01 13:00:18 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state STARTED 2025-11-01 13:00:18.391695 | orchestrator | 2025-11-01 13:00:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:21.442655 | orchestrator | 2025-11-01 13:00:21 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:21.445305 | orchestrator | 2025-11-01 13:00:21 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:21.447857 | orchestrator | 2025-11-01 13:00:21 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:21.449412 | orchestrator | 2025-11-01 13:00:21 | INFO  | Task 5be886de-86e3-43bf-bd58-3b57d6708f50 is in state SUCCESS 2025-11-01 13:00:21.450538 | orchestrator | 2025-11-01 13:00:21.450572 | orchestrator | 2025-11-01 13:00:21.450584 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 
2025-11-01 13:00:21.450596 | orchestrator |
2025-11-01 13:00:21.450607 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-11-01 13:00:21.450617 | orchestrator | Saturday 01 November 2025 12:59:00 +0000 (0:00:00.248) 0:00:00.248 *****
2025-11-01 13:00:21.450629 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-11-01 13:00:21.450640 | orchestrator |
2025-11-01 13:00:21.450650 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-11-01 13:00:21.450661 | orchestrator | Saturday 01 November 2025 12:59:01 +0000 (0:00:00.925) 0:00:01.173 *****
2025-11-01 13:00:21.450672 | orchestrator | changed: [testbed-manager]
2025-11-01 13:00:21.450683 | orchestrator |
2025-11-01 13:00:21.450694 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-11-01 13:00:21.450705 | orchestrator | Saturday 01 November 2025 12:59:03 +0000 (0:00:02.026) 0:00:03.200 *****
2025-11-01 13:00:21.450716 | orchestrator | changed: [testbed-manager]
2025-11-01 13:00:21.450726 | orchestrator |
2025-11-01 13:00:21.450737 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:00:21.450749 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:00:21.450761 | orchestrator |
2025-11-01 13:00:21.450772 | orchestrator |
2025-11-01 13:00:21.450782 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:00:21.450793 | orchestrator | Saturday 01 November 2025 12:59:03 +0000 (0:00:00.543) 0:00:03.743 *****
2025-11-01 13:00:21.450803 | orchestrator | ===============================================================================
2025-11-01 13:00:21.450814 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.03s
2025-11-01 13:00:21.450825 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s
2025-11-01 13:00:21.450835 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.54s
2025-11-01 13:00:21.450846 | orchestrator |
2025-11-01 13:00:21.450857 | orchestrator |
2025-11-01 13:00:21.450867 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-11-01 13:00:21.450878 | orchestrator |
2025-11-01 13:00:21.450888 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-11-01 13:00:21.450899 | orchestrator | Saturday 01 November 2025 12:58:59 +0000 (0:00:00.193) 0:00:00.193 *****
2025-11-01 13:00:21.450910 | orchestrator | ok: [testbed-manager]
2025-11-01 13:00:21.450922 | orchestrator |
2025-11-01 13:00:21.450933 | orchestrator | TASK [Create .kube directory] **************************************************
2025-11-01 13:00:21.450944 | orchestrator | Saturday 01 November 2025 12:59:00 +0000 (0:00:00.930) 0:00:01.124 *****
2025-11-01 13:00:21.450954 | orchestrator | ok: [testbed-manager]
2025-11-01 13:00:21.450965 | orchestrator |
2025-11-01 13:00:21.450976 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-11-01 13:00:21.450986 | orchestrator | Saturday 01 November 2025 12:59:01 +0000 (0:00:00.802) 0:00:01.926 *****
2025-11-01 13:00:21.450997 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-11-01 13:00:21.451008 | orchestrator |
2025-11-01 13:00:21.451018 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-11-01 13:00:21.451029 | orchestrator | Saturday 01 November 2025 12:59:02 +0000 (0:00:00.957) 0:00:02.884 *****
2025-11-01 13:00:21.451055 | orchestrator | changed: [testbed-manager]
2025-11-01 13:00:21.451066 | orchestrator |
2025-11-01 13:00:21.451077 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-11-01 13:00:21.451088 | orchestrator | Saturday 01 November 2025 12:59:04 +0000 (0:00:02.125) 0:00:05.009 *****
2025-11-01 13:00:21.451098 | orchestrator | changed: [testbed-manager]
2025-11-01 13:00:21.451109 | orchestrator |
2025-11-01 13:00:21.451120 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-11-01 13:00:21.451133 | orchestrator | Saturday 01 November 2025 12:59:05 +0000 (0:00:00.673) 0:00:05.683 *****
2025-11-01 13:00:21.451145 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-01 13:00:21.451158 | orchestrator |
2025-11-01 13:00:21.451171 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-11-01 13:00:21.451184 | orchestrator | Saturday 01 November 2025 12:59:07 +0000 (0:00:01.985) 0:00:07.669 *****
2025-11-01 13:00:21.451220 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-01 13:00:21.451233 | orchestrator |
2025-11-01 13:00:21.451260 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-11-01 13:00:21.451273 | orchestrator | Saturday 01 November 2025 12:59:08 +0000 (0:00:01.111) 0:00:08.780 *****
2025-11-01 13:00:21.451287 | orchestrator | ok: [testbed-manager]
2025-11-01 13:00:21.451299 | orchestrator |
2025-11-01 13:00:21.451312 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-11-01 13:00:21.451324 | orchestrator | Saturday 01 November 2025 12:59:08 +0000 (0:00:00.514) 0:00:09.295 *****
2025-11-01 13:00:21.451336 | orchestrator | ok: [testbed-manager]
2025-11-01 13:00:21.451349 | orchestrator |
2025-11-01 13:00:21.451361 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:00:21.451374 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:00:21.451386 | orchestrator |
2025-11-01 13:00:21.451398 | orchestrator |
2025-11-01 13:00:21.451411 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:00:21.451424 | orchestrator | Saturday 01 November 2025 12:59:09 +0000 (0:00:00.519) 0:00:09.814 *****
2025-11-01 13:00:21.451436 | orchestrator | ===============================================================================
2025-11-01 13:00:21.451449 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.13s
2025-11-01 13:00:21.451462 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.99s
2025-11-01 13:00:21.451475 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.11s
2025-11-01 13:00:21.451497 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s
2025-11-01 13:00:21.451509 | orchestrator | Get home directory of operator user ------------------------------------- 0.93s
2025-11-01 13:00:21.451520 | orchestrator | Create .kube directory -------------------------------------------------- 0.80s
2025-11-01 13:00:21.451531 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.67s
2025-11-01 13:00:21.451541 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.52s
2025-11-01 13:00:21.451552 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.51s
2025-11-01 13:00:21.451563 | orchestrator |
2025-11-01 13:00:21.451573 | orchestrator |
2025-11-01 13:00:21.451585 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-11-01 13:00:21.451595 | orchestrator |
2025-11-01 13:00:21.451606 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-11-01 13:00:21.451617 | orchestrator | Saturday 01 November 2025 12:57:38 +0000 (0:00:00.368) 0:00:00.368 *****
2025-11-01 13:00:21.451628 | orchestrator | ok: [localhost] => {
2025-11-01 13:00:21.451640 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-11-01 13:00:21.451651 | orchestrator | }
2025-11-01 13:00:21.451669 | orchestrator |
2025-11-01 13:00:21.451680 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-11-01 13:00:21.451691 | orchestrator | Saturday 01 November 2025 12:57:38 +0000 (0:00:00.126) 0:00:00.494 *****
2025-11-01 13:00:21.451703 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-11-01 13:00:21.451715 | orchestrator | ...ignoring
2025-11-01 13:00:21.451726 | orchestrator |
2025-11-01 13:00:21.451737 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-11-01 13:00:21.451748 | orchestrator | Saturday 01 November 2025 12:57:43 +0000 (0:00:05.155) 0:00:05.650 *****
2025-11-01 13:00:21.451759 | orchestrator | skipping: [localhost]
2025-11-01 13:00:21.451769 | orchestrator |
2025-11-01 13:00:21.451780 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-11-01 13:00:21.451791 | orchestrator | Saturday 01 November 2025 12:57:44 +0000 (0:00:00.184) 0:00:05.835 *****
2025-11-01 13:00:21.451802 | orchestrator | ok: [localhost]
2025-11-01 13:00:21.451813 | orchestrator |
2025-11-01 13:00:21.451824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:00:21.451834 | orchestrator |
2025-11-01 13:00:21.451845 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:00:21.451856 | orchestrator | Saturday 01 November 2025 12:57:44 +0000 (0:00:00.368) 0:00:06.203 *****
2025-11-01 13:00:21.451867 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:00:21.451877 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:00:21.451888 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:00:21.451899 | orchestrator |
2025-11-01 13:00:21.451910 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:00:21.451921 | orchestrator | Saturday 01 November 2025 12:57:45 +0000 (0:00:01.113) 0:00:07.317 *****
2025-11-01 13:00:21.451931 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-11-01 13:00:21.451943 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-11-01 13:00:21.451954 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-11-01 13:00:21.451964 | orchestrator |
2025-11-01 13:00:21.451975 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-11-01 13:00:21.451986 | orchestrator |
2025-11-01 13:00:21.451997 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-11-01 13:00:21.452008 | orchestrator | Saturday 01 November 2025 12:57:47 +0000 (0:00:01.380) 0:00:08.697 *****
2025-11-01 13:00:21.452019 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:00:21.452030 | orchestrator |
2025-11-01 13:00:21.452041 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-11-01 13:00:21.452052 | orchestrator | Saturday 01 November 2025 12:57:48 +0000 (0:00:01.339) 0:00:10.036 *****
2025-11-01 13:00:21.452062 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:00:21.452073 | orchestrator |
2025-11-01 13:00:21.452084 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-11-01 13:00:21.452100 | orchestrator | Saturday 01 November 2025 12:57:51 +0000 (0:00:03.441) 0:00:13.477 *****
2025-11-01 13:00:21.452111 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:00:21.452122 | orchestrator |
2025-11-01 13:00:21.452133 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-11-01 13:00:21.452144 | orchestrator | Saturday 01 November 2025 12:57:52 +0000 (0:00:00.546) 0:00:14.023 *****
2025-11-01 13:00:21.452154 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:00:21.452165 | orchestrator |
2025-11-01 13:00:21.452176 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-11-01 13:00:21.452187 | orchestrator | Saturday 01 November 2025 12:57:52 +0000 (0:00:00.507) 0:00:14.531 *****
2025-11-01 13:00:21.452216 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:00:21.452228 | orchestrator |
2025-11-01 13:00:21.452239 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-11-01 13:00:21.452257 | orchestrator | Saturday 01 November 2025 12:57:53 +0000 (0:00:00.545) 0:00:15.076 *****
2025-11-01 13:00:21.452268 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:00:21.452279 | orchestrator |
2025-11-01 13:00:21.452290 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-11-01 13:00:21.452301 | orchestrator | Saturday 01 November 2025 12:57:54 +0000 (0:00:00.973) 0:00:16.050 *****
2025-11-01 13:00:21.452312 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:00:21.452323 | orchestrator |
2025-11-01 13:00:21.452333 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-11-01 13:00:21.452350 | orchestrator | Saturday 01 November 2025 12:57:56 +0000 (0:00:01.865) 0:00:17.915 *****
2025-11-01 13:00:21.452361 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:00:21.452372 | orchestrator | 2025-11-01 13:00:21.452383 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-11-01 13:00:21.452393 | orchestrator | Saturday 01 November 2025 12:57:57 +0000 (0:00:01.031) 0:00:18.947 ***** 2025-11-01 13:00:21.452404 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:00:21.452415 | orchestrator | 2025-11-01 13:00:21.452426 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-11-01 13:00:21.452437 | orchestrator | Saturday 01 November 2025 12:57:57 +0000 (0:00:00.556) 0:00:19.503 ***** 2025-11-01 13:00:21.452448 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:00:21.452459 | orchestrator | 2025-11-01 13:00:21.452469 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-11-01 13:00:21.452480 | orchestrator | Saturday 01 November 2025 12:57:58 +0000 (0:00:00.733) 0:00:20.236 ***** 2025-11-01 13:00:21.452496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452552 | orchestrator | 2025-11-01 13:00:21.452563 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-11-01 13:00:21.452574 | orchestrator | Saturday 01 November 2025 12:58:01 +0000 (0:00:02.660) 0:00:22.897 ***** 2025-11-01 13:00:21.452594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.452643 | orchestrator | 2025-11-01 13:00:21.452654 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] 
******************************* 2025-11-01 13:00:21.452665 | orchestrator | Saturday 01 November 2025 12:58:05 +0000 (0:00:03.823) 0:00:26.720 ***** 2025-11-01 13:00:21.452676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 13:00:21.452687 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 13:00:21.452698 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-11-01 13:00:21.452709 | orchestrator | 2025-11-01 13:00:21.452720 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-11-01 13:00:21.452731 | orchestrator | Saturday 01 November 2025 12:58:07 +0000 (0:00:02.690) 0:00:29.410 ***** 2025-11-01 13:00:21.452742 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 13:00:21.452753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 13:00:21.452763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-11-01 13:00:21.452774 | orchestrator | 2025-11-01 13:00:21.452785 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-11-01 13:00:21.452801 | orchestrator | Saturday 01 November 2025 12:58:10 +0000 (0:00:02.993) 0:00:32.404 ***** 2025-11-01 13:00:21.452812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 13:00:21.452823 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 13:00:21.452834 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-11-01 13:00:21.452845 | orchestrator | 2025-11-01 13:00:21.452855 | orchestrator | TASK 
[rabbitmq : Copying over advanced.config] ********************************* 2025-11-01 13:00:21.452866 | orchestrator | Saturday 01 November 2025 12:58:12 +0000 (0:00:01.828) 0:00:34.233 ***** 2025-11-01 13:00:21.452877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 13:00:21.452888 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 13:00:21.452899 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-11-01 13:00:21.452910 | orchestrator | 2025-11-01 13:00:21.452920 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-11-01 13:00:21.452931 | orchestrator | Saturday 01 November 2025 12:58:14 +0000 (0:00:02.195) 0:00:36.429 ***** 2025-11-01 13:00:21.452942 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 13:00:21.452953 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 13:00:21.452964 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-11-01 13:00:21.452975 | orchestrator | 2025-11-01 13:00:21.452986 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-11-01 13:00:21.452997 | orchestrator | Saturday 01 November 2025 12:58:16 +0000 (0:00:01.759) 0:00:38.188 ***** 2025-11-01 13:00:21.453008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 13:00:21.453019 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 13:00:21.453029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-11-01 13:00:21.453046 | 
orchestrator | 2025-11-01 13:00:21.453057 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-11-01 13:00:21.453068 | orchestrator | Saturday 01 November 2025 12:58:18 +0000 (0:00:02.055) 0:00:40.244 ***** 2025-11-01 13:00:21.453079 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:00:21.453090 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:00:21.453101 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:00:21.453112 | orchestrator | 2025-11-01 13:00:21.453123 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-11-01 13:00:21.453133 | orchestrator | Saturday 01 November 2025 12:58:19 +0000 (0:00:00.926) 0:00:41.170 ***** 2025-11-01 13:00:21.453150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.453170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.453182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-11-01 13:00:21.453194 | orchestrator | 2025-11-01 13:00:21.453221 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-11-01 13:00:21.453239 | orchestrator | Saturday 01 November 2025 12:58:22 +0000 (0:00:03.028) 0:00:44.198 ***** 2025-11-01 13:00:21.453250 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:00:21.453261 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:00:21.453272 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:00:21.453283 | orchestrator | 2025-11-01 13:00:21.453294 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-11-01 13:00:21.453305 | orchestrator | Saturday 01 November 2025 12:58:23 +0000 (0:00:01.000) 0:00:45.199 ***** 2025-11-01 13:00:21.453315 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:00:21.453326 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:00:21.453337 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:00:21.453348 | orchestrator | 2025-11-01 13:00:21.453359 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-11-01 13:00:21.453370 | orchestrator | Saturday 01 November 2025 12:58:30 +0000 (0:00:07.162) 0:00:52.361 ***** 2025-11-01 13:00:21.453381 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:00:21.453392 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:00:21.453403 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:00:21.453413 | orchestrator | 2025-11-01 13:00:21.453424 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 13:00:21.453435 | orchestrator | 2025-11-01 13:00:21.453446 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 13:00:21.453457 | orchestrator | Saturday 01 November 2025 12:58:31 +0000 (0:00:00.539) 0:00:52.901 ***** 2025-11-01 13:00:21.453467 
| orchestrator | ok: [testbed-node-0] 2025-11-01 13:00:21.453478 | orchestrator | 2025-11-01 13:00:21.453489 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 13:00:21.453499 | orchestrator | Saturday 01 November 2025 12:58:31 +0000 (0:00:00.713) 0:00:53.614 ***** 2025-11-01 13:00:21.453510 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:00:21.453521 | orchestrator | 2025-11-01 13:00:21.453532 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-01 13:00:21.453542 | orchestrator | Saturday 01 November 2025 12:58:32 +0000 (0:00:00.280) 0:00:53.895 ***** 2025-11-01 13:00:21.453553 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:00:21.453564 | orchestrator | 2025-11-01 13:00:21.453575 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-01 13:00:21.453585 | orchestrator | Saturday 01 November 2025 12:58:34 +0000 (0:00:01.865) 0:00:55.760 ***** 2025-11-01 13:00:21.453596 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:00:21.453607 | orchestrator | 2025-11-01 13:00:21.453623 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 13:00:21.453634 | orchestrator | 2025-11-01 13:00:21.453644 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 13:00:21.453655 | orchestrator | Saturday 01 November 2025 12:59:33 +0000 (0:00:59.815) 0:01:55.576 ***** 2025-11-01 13:00:21.453666 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:00:21.453676 | orchestrator | 2025-11-01 13:00:21.453687 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 13:00:21.453698 | orchestrator | Saturday 01 November 2025 12:59:34 +0000 (0:00:00.669) 0:01:56.245 ***** 2025-11-01 13:00:21.453709 | orchestrator | skipping: [testbed-node-1] 2025-11-01 
13:00:21.453720 | orchestrator | 2025-11-01 13:00:21.453730 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-01 13:00:21.453741 | orchestrator | Saturday 01 November 2025 12:59:34 +0000 (0:00:00.277) 0:01:56.523 ***** 2025-11-01 13:00:21.453752 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:00:21.453763 | orchestrator | 2025-11-01 13:00:21.453773 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-01 13:00:21.453784 | orchestrator | Saturday 01 November 2025 12:59:37 +0000 (0:00:02.314) 0:01:58.837 ***** 2025-11-01 13:00:21.453795 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:00:21.453817 | orchestrator | 2025-11-01 13:00:21.453828 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-11-01 13:00:21.453839 | orchestrator | 2025-11-01 13:00:21.453849 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-11-01 13:00:21.453860 | orchestrator | Saturday 01 November 2025 12:59:57 +0000 (0:00:20.239) 0:02:19.076 ***** 2025-11-01 13:00:21.453871 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:00:21.453881 | orchestrator | 2025-11-01 13:00:21.453898 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-11-01 13:00:21.453909 | orchestrator | Saturday 01 November 2025 12:59:58 +0000 (0:00:00.641) 0:02:19.718 ***** 2025-11-01 13:00:21.453920 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:00:21.453931 | orchestrator | 2025-11-01 13:00:21.453942 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-11-01 13:00:21.453953 | orchestrator | Saturday 01 November 2025 12:59:58 +0000 (0:00:00.277) 0:02:19.996 ***** 2025-11-01 13:00:21.453964 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:00:21.453975 | orchestrator | 2025-11-01 
13:00:21.453986 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-11-01 13:00:21.453996 | orchestrator | Saturday 01 November 2025 12:59:59 +0000 (0:00:01.624) 0:02:21.620 ***** 2025-11-01 13:00:21.454007 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:00:21.454070 | orchestrator | 2025-11-01 13:00:21.454084 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-11-01 13:00:21.454095 | orchestrator | 2025-11-01 13:00:21.454106 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-11-01 13:00:21.454117 | orchestrator | Saturday 01 November 2025 13:00:16 +0000 (0:00:16.621) 0:02:38.242 ***** 2025-11-01 13:00:21.454127 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:00:21.454138 | orchestrator | 2025-11-01 13:00:21.454149 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-11-01 13:00:21.454159 | orchestrator | Saturday 01 November 2025 13:00:17 +0000 (0:00:00.729) 0:02:38.972 ***** 2025-11-01 13:00:21.454170 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-01 13:00:21.454181 | orchestrator | enable_outward_rabbitmq_True 2025-11-01 13:00:21.454192 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-11-01 13:00:21.454222 | orchestrator | outward_rabbitmq_restart 2025-11-01 13:00:21.454233 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:00:21.454244 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:00:21.454255 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:00:21.454266 | orchestrator | 2025-11-01 13:00:21.454276 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-11-01 13:00:21.454287 | orchestrator | skipping: no hosts matched 2025-11-01 13:00:21.454298 | orchestrator | 2025-11-01 
13:00:21.454308 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-11-01 13:00:21.454319 | orchestrator | skipping: no hosts matched 2025-11-01 13:00:21.454330 | orchestrator | 2025-11-01 13:00:21.454341 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-11-01 13:00:21.454351 | orchestrator | skipping: no hosts matched 2025-11-01 13:00:21.454362 | orchestrator | 2025-11-01 13:00:21.454373 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:00:21.454384 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-01 13:00:21.454396 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 13:00:21.454407 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:00:21.454418 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-11-01 13:00:21.454435 | orchestrator | 2025-11-01 13:00:21.454446 | orchestrator | 2025-11-01 13:00:21.454457 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:00:21.454468 | orchestrator | Saturday 01 November 2025 13:00:20 +0000 (0:00:02.857) 0:02:41.830 ***** 2025-11-01 13:00:21.454479 | orchestrator | =============================================================================== 2025-11-01 13:00:21.454489 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 96.68s 2025-11-01 13:00:21.454500 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.16s 2025-11-01 13:00:21.454511 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.81s 2025-11-01 13:00:21.454527 | orchestrator | Check 
RabbitMQ service -------------------------------------------------- 5.16s 2025-11-01 13:00:21.454538 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.82s 2025-11-01 13:00:21.454549 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 3.44s 2025-11-01 13:00:21.454559 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 3.03s 2025-11-01 13:00:21.454570 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.99s 2025-11-01 13:00:21.454581 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.86s 2025-11-01 13:00:21.454592 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.69s 2025-11-01 13:00:21.454602 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.66s 2025-11-01 13:00:21.454613 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.20s 2025-11-01 13:00:21.454624 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.06s 2025-11-01 13:00:21.454634 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.02s 2025-11-01 13:00:21.454645 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.87s 2025-11-01 13:00:21.454656 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.83s 2025-11-01 13:00:21.454667 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.76s 2025-11-01 13:00:21.454684 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.38s 2025-11-01 13:00:21.454695 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.34s 2025-11-01 13:00:21.454705 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 1.11s 2025-11-01 13:00:21.454716 | orchestrator | 2025-11-01 13:00:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:24.507657 | orchestrator | 2025-11-01 13:00:24 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:24.509352 | orchestrator | 2025-11-01 13:00:24 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:24.510404 | orchestrator | 2025-11-01 13:00:24 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:24.510532 | orchestrator | 2025-11-01 13:00:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:27.560195 | orchestrator | 2025-11-01 13:00:27 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:27.561677 | orchestrator | 2025-11-01 13:00:27 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:27.563684 | orchestrator | 2025-11-01 13:00:27 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:27.563721 | orchestrator | 2025-11-01 13:00:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:30.606282 | orchestrator | 2025-11-01 13:00:30 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:30.608326 | orchestrator | 2025-11-01 13:00:30 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:30.609230 | orchestrator | 2025-11-01 13:00:30 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:30.609382 | orchestrator | 2025-11-01 13:00:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:33.648994 | orchestrator | 2025-11-01 13:00:33 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:33.650333 | orchestrator | 2025-11-01 13:00:33 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state 
STARTED 2025-11-01 13:00:33.651839 | orchestrator | 2025-11-01 13:00:33 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:33.651881 | orchestrator | 2025-11-01 13:00:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:36.699662 | orchestrator | 2025-11-01 13:00:36 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:36.700653 | orchestrator | 2025-11-01 13:00:36 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:36.701378 | orchestrator | 2025-11-01 13:00:36 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:36.701404 | orchestrator | 2025-11-01 13:00:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:39.749818 | orchestrator | 2025-11-01 13:00:39 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:39.750672 | orchestrator | 2025-11-01 13:00:39 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:39.753124 | orchestrator | 2025-11-01 13:00:39 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:39.753330 | orchestrator | 2025-11-01 13:00:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:42.794271 | orchestrator | 2025-11-01 13:00:42 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:42.798410 | orchestrator | 2025-11-01 13:00:42 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:42.799912 | orchestrator | 2025-11-01 13:00:42 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:42.799935 | orchestrator | 2025-11-01 13:00:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:45.842667 | orchestrator | 2025-11-01 13:00:45 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:45.843826 | orchestrator | 
2025-11-01 13:00:45 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:45.844904 | orchestrator | 2025-11-01 13:00:45 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:45.844924 | orchestrator | 2025-11-01 13:00:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:48.889446 | orchestrator | 2025-11-01 13:00:48 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:48.891444 | orchestrator | 2025-11-01 13:00:48 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:48.893339 | orchestrator | 2025-11-01 13:00:48 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:48.893361 | orchestrator | 2025-11-01 13:00:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:51.941374 | orchestrator | 2025-11-01 13:00:51 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:51.942887 | orchestrator | 2025-11-01 13:00:51 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:51.944371 | orchestrator | 2025-11-01 13:00:51 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:51.944393 | orchestrator | 2025-11-01 13:00:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:54.982405 | orchestrator | 2025-11-01 13:00:54 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:54.984314 | orchestrator | 2025-11-01 13:00:54 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:54.985402 | orchestrator | 2025-11-01 13:00:54 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:54.985428 | orchestrator | 2025-11-01 13:00:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:00:58.022112 | orchestrator | 2025-11-01 13:00:58 | INFO  | Task 
b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:00:58.026167 | orchestrator | 2025-11-01 13:00:58 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state STARTED 2025-11-01 13:00:58.029126 | orchestrator | 2025-11-01 13:00:58 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:00:58.029152 | orchestrator | 2025-11-01 13:00:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:01:01.077268 | orchestrator | 2025-11-01 13:01:01 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:01:01.080489 | orchestrator | 2025-11-01 13:01:01 | INFO  | Task 781faa6c-7c63-48c2-96c8-3e95733481ca is in state SUCCESS 2025-11-01 13:01:01.081621 | orchestrator | 2025-11-01 13:01:01.081659 | orchestrator | 2025-11-01 13:01:01.081672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:01:01.081684 | orchestrator | 2025-11-01 13:01:01.081696 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:01:01.081708 | orchestrator | Saturday 01 November 2025 12:58:34 +0000 (0:00:00.243) 0:00:00.243 ***** 2025-11-01 13:01:01.081719 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:01:01.081731 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:01:01.081743 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:01:01.081753 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.081764 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.081789 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.081800 | orchestrator | 2025-11-01 13:01:01.081812 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:01:01.081823 | orchestrator | Saturday 01 November 2025 12:58:35 +0000 (0:00:00.956) 0:00:01.199 ***** 2025-11-01 13:01:01.081834 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-11-01 
13:01:01.081846 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-11-01 13:01:01.082006 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-11-01 13:01:01.082532 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-11-01 13:01:01.082555 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-11-01 13:01:01.082567 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-11-01 13:01:01.082579 | orchestrator | 2025-11-01 13:01:01.082591 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-11-01 13:01:01.082603 | orchestrator | 2025-11-01 13:01:01.082631 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-11-01 13:01:01.082644 | orchestrator | Saturday 01 November 2025 12:58:36 +0000 (0:00:01.280) 0:00:02.480 ***** 2025-11-01 13:01:01.082657 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:01:01.082669 | orchestrator | 2025-11-01 13:01:01.082681 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-11-01 13:01:01.082692 | orchestrator | Saturday 01 November 2025 12:58:38 +0000 (0:00:01.942) 0:00:04.422 ***** 2025-11-01 13:01:01.082730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082747 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082806 | orchestrator | 2025-11-01 13:01:01.082830 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-11-01 13:01:01.082843 | orchestrator | Saturday 01 November 2025 12:58:40 +0000 (0:00:01.877) 0:00:06.300 ***** 2025-11-01 13:01:01.082855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.082940 | orchestrator | 2025-11-01 13:01:01.082952 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-11-01 13:01:01.082964 | orchestrator | Saturday 01 November 2025 12:58:42 +0000 (0:00:01.850) 0:00:08.150 ***** 2025-11-01 13:01:01.082976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-11-01 13:01:01.082988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083079 | orchestrator | 2025-11-01 13:01:01.083091 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-11-01 13:01:01.083102 | orchestrator | Saturday 01 November 2025 12:58:44 +0000 (0:00:01.550) 0:00:09.701 ***** 2025-11-01 13:01:01.083115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083151 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083187 | orchestrator | 2025-11-01 13:01:01.083230 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-11-01 13:01:01.083243 | orchestrator | Saturday 01 November 2025 12:58:46 +0000 (0:00:02.896) 0:00:12.598 ***** 2025-11-01 13:01:01.083255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083337 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.083349 | orchestrator | 2025-11-01 13:01:01.083361 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-11-01 13:01:01.083373 | orchestrator | Saturday 01 November 2025 12:58:49 +0000 (0:00:02.246) 0:00:14.844 ***** 2025-11-01 13:01:01.083385 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:01:01.083397 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:01:01.083408 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:01:01.083420 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.083432 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.083443 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.083455 | orchestrator | 2025-11-01 13:01:01.083466 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-11-01 13:01:01.083478 | orchestrator | Saturday 01 November 2025 12:58:52 +0000 (0:00:02.953) 0:00:17.798 ***** 2025-11-01 13:01:01.083489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-11-01 13:01:01.083501 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-11-01 13:01:01.083513 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-11-01 13:01:01.083524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083536 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-11-01 13:01:01.083547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-11-01 13:01:01.083565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-11-01 13:01:01.083577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083606 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083618 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-11-01 13:01:01.083654 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083678 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-11-01 13:01:01.083730 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083741 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083788 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-11-01 13:01:01.083799 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083811 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 13:01:01.083823 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083834 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083857 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-11-01 13:01:01.083869 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 
2025-11-01 13:01:01.083880 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 13:01:01.083892 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 13:01:01.083903 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 13:01:01.083915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 13:01:01.083933 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 13:01:01.083945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-11-01 13:01:01.083956 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-11-01 13:01:01.083968 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 13:01:01.083980 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-11-01 13:01:01.083991 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 13:01:01.084003 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-11-01 13:01:01.084014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-11-01 13:01:01.084032 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 13:01:01.084043 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-11-01 13:01:01.084055 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-11-01 13:01:01.084067 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-11-01 13:01:01.084078 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 13:01:01.084090 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-11-01 13:01:01.084102 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-11-01 13:01:01.084113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 13:01:01.084129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 13:01:01.084141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-11-01 13:01:01.084153 | orchestrator | 2025-11-01 13:01:01.084164 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084176 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:22.295) 0:00:40.094 ***** 2025-11-01 13:01:01.084188 | orchestrator | 2025-11-01 13:01:01.084253 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084267 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.072) 
0:00:40.167 ***** 2025-11-01 13:01:01.084279 | orchestrator | 2025-11-01 13:01:01.084290 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084302 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.080) 0:00:40.247 ***** 2025-11-01 13:01:01.084313 | orchestrator | 2025-11-01 13:01:01.084324 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084334 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.065) 0:00:40.312 ***** 2025-11-01 13:01:01.084344 | orchestrator | 2025-11-01 13:01:01.084354 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084364 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.070) 0:00:40.382 ***** 2025-11-01 13:01:01.084381 | orchestrator | 2025-11-01 13:01:01.084391 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-11-01 13:01:01.084401 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.076) 0:00:40.459 ***** 2025-11-01 13:01:01.084411 | orchestrator | 2025-11-01 13:01:01.084421 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-11-01 13:01:01.084432 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:00.069) 0:00:40.528 ***** 2025-11-01 13:01:01.084442 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:01:01.084452 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084462 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:01:01.084472 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.084482 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.084492 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:01:01.084502 | orchestrator | 2025-11-01 13:01:01.084512 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] 
************ 2025-11-01 13:01:01.084522 | orchestrator | Saturday 01 November 2025 12:59:16 +0000 (0:00:01.922) 0:00:42.450 ***** 2025-11-01 13:01:01.084532 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.084542 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:01:01.084553 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:01:01.084563 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:01:01.084573 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.084583 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.084593 | orchestrator | 2025-11-01 13:01:01.084603 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-11-01 13:01:01.084613 | orchestrator | 2025-11-01 13:01:01.084624 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 13:01:01.084634 | orchestrator | Saturday 01 November 2025 12:59:45 +0000 (0:00:29.187) 0:01:11.638 ***** 2025-11-01 13:01:01.084644 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:01:01.084654 | orchestrator | 2025-11-01 13:01:01.084664 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 13:01:01.084674 | orchestrator | Saturday 01 November 2025 12:59:46 +0000 (0:00:00.848) 0:01:12.486 ***** 2025-11-01 13:01:01.084684 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:01:01.084695 | orchestrator | 2025-11-01 13:01:01.084705 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-11-01 13:01:01.084715 | orchestrator | Saturday 01 November 2025 12:59:47 +0000 (0:00:00.692) 0:01:13.178 ***** 2025-11-01 13:01:01.084725 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084735 | orchestrator | ok: 
[testbed-node-0] 2025-11-01 13:01:01.084745 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.084755 | orchestrator | 2025-11-01 13:01:01.084765 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-11-01 13:01:01.084775 | orchestrator | Saturday 01 November 2025 12:59:48 +0000 (0:00:01.365) 0:01:14.543 ***** 2025-11-01 13:01:01.084785 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.084795 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084805 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.084821 | orchestrator | 2025-11-01 13:01:01.084831 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-11-01 13:01:01.084841 | orchestrator | Saturday 01 November 2025 12:59:49 +0000 (0:00:00.740) 0:01:15.284 ***** 2025-11-01 13:01:01.084851 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.084861 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084871 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.084881 | orchestrator | 2025-11-01 13:01:01.084891 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-11-01 13:01:01.084902 | orchestrator | Saturday 01 November 2025 12:59:50 +0000 (0:00:00.419) 0:01:15.703 ***** 2025-11-01 13:01:01.084912 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.084928 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084938 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.084948 | orchestrator | 2025-11-01 13:01:01.084958 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-11-01 13:01:01.084968 | orchestrator | Saturday 01 November 2025 12:59:50 +0000 (0:00:00.365) 0:01:16.069 ***** 2025-11-01 13:01:01.084978 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.084988 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.084998 | orchestrator | ok: 
[testbed-node-2] 2025-11-01 13:01:01.085008 | orchestrator | 2025-11-01 13:01:01.085018 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-11-01 13:01:01.085028 | orchestrator | Saturday 01 November 2025 12:59:51 +0000 (0:00:00.623) 0:01:16.692 ***** 2025-11-01 13:01:01.085038 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085048 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085058 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085068 | orchestrator | 2025-11-01 13:01:01.085083 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-11-01 13:01:01.085093 | orchestrator | Saturday 01 November 2025 12:59:51 +0000 (0:00:00.357) 0:01:17.050 ***** 2025-11-01 13:01:01.085103 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085114 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085124 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085134 | orchestrator | 2025-11-01 13:01:01.085144 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-11-01 13:01:01.085155 | orchestrator | Saturday 01 November 2025 12:59:51 +0000 (0:00:00.355) 0:01:17.405 ***** 2025-11-01 13:01:01.085165 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085175 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085185 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085195 | orchestrator | 2025-11-01 13:01:01.085219 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-11-01 13:01:01.085230 | orchestrator | Saturday 01 November 2025 12:59:52 +0000 (0:00:00.361) 0:01:17.767 ***** 2025-11-01 13:01:01.085240 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085250 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085261 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:01:01.085271 | orchestrator | 2025-11-01 13:01:01.085281 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-11-01 13:01:01.085291 | orchestrator | Saturday 01 November 2025 12:59:52 +0000 (0:00:00.578) 0:01:18.346 ***** 2025-11-01 13:01:01.085301 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085312 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085322 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085332 | orchestrator | 2025-11-01 13:01:01.085342 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-11-01 13:01:01.085352 | orchestrator | Saturday 01 November 2025 12:59:53 +0000 (0:00:00.406) 0:01:18.752 ***** 2025-11-01 13:01:01.085363 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085373 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085383 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085393 | orchestrator | 2025-11-01 13:01:01.085403 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-11-01 13:01:01.085414 | orchestrator | Saturday 01 November 2025 12:59:53 +0000 (0:00:00.354) 0:01:19.106 ***** 2025-11-01 13:01:01.085424 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085434 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085445 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085455 | orchestrator | 2025-11-01 13:01:01.085465 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-11-01 13:01:01.085475 | orchestrator | Saturday 01 November 2025 12:59:53 +0000 (0:00:00.385) 0:01:19.492 ***** 2025-11-01 13:01:01.085486 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085496 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085514 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:01:01.085525 | orchestrator | 2025-11-01 13:01:01.085535 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-11-01 13:01:01.085545 | orchestrator | Saturday 01 November 2025 12:59:54 +0000 (0:00:00.590) 0:01:20.082 ***** 2025-11-01 13:01:01.085555 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085565 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085576 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085586 | orchestrator | 2025-11-01 13:01:01.085596 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-11-01 13:01:01.085606 | orchestrator | Saturday 01 November 2025 12:59:54 +0000 (0:00:00.317) 0:01:20.399 ***** 2025-11-01 13:01:01.085616 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085627 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085637 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085647 | orchestrator | 2025-11-01 13:01:01.085657 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-11-01 13:01:01.085668 | orchestrator | Saturday 01 November 2025 12:59:55 +0000 (0:00:00.331) 0:01:20.730 ***** 2025-11-01 13:01:01.085678 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085688 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085698 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.085708 | orchestrator | 2025-11-01 13:01:01.085718 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-11-01 13:01:01.085729 | orchestrator | Saturday 01 November 2025 12:59:55 +0000 (0:00:00.420) 0:01:21.151 ***** 2025-11-01 13:01:01.085739 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085749 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085766 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:01:01.085776 | orchestrator | 2025-11-01 13:01:01.085786 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-11-01 13:01:01.085797 | orchestrator | Saturday 01 November 2025 12:59:55 +0000 (0:00:00.381) 0:01:21.532 ***** 2025-11-01 13:01:01.085807 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:01:01.085817 | orchestrator | 2025-11-01 13:01:01.085828 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-11-01 13:01:01.085838 | orchestrator | Saturday 01 November 2025 12:59:56 +0000 (0:00:01.009) 0:01:22.542 ***** 2025-11-01 13:01:01.085848 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.085858 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.085868 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.085879 | orchestrator | 2025-11-01 13:01:01.085889 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-11-01 13:01:01.085899 | orchestrator | Saturday 01 November 2025 12:59:57 +0000 (0:00:00.785) 0:01:23.328 ***** 2025-11-01 13:01:01.085909 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.085919 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.085930 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.085940 | orchestrator | 2025-11-01 13:01:01.085950 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-11-01 13:01:01.085960 | orchestrator | Saturday 01 November 2025 12:59:58 +0000 (0:00:00.607) 0:01:23.935 ***** 2025-11-01 13:01:01.085970 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.085981 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.085995 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086005 | orchestrator | 2025-11-01 13:01:01.086043 | 
orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-11-01 13:01:01.086055 | orchestrator | Saturday 01 November 2025 12:59:58 +0000 (0:00:00.595) 0:01:24.531 ***** 2025-11-01 13:01:01.086065 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086075 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.086084 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086094 | orchestrator | 2025-11-01 13:01:01.086104 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-11-01 13:01:01.086120 | orchestrator | Saturday 01 November 2025 12:59:59 +0000 (0:00:00.394) 0:01:24.926 ***** 2025-11-01 13:01:01.086130 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086140 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.086149 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086159 | orchestrator | 2025-11-01 13:01:01.086168 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-11-01 13:01:01.086178 | orchestrator | Saturday 01 November 2025 12:59:59 +0000 (0:00:00.362) 0:01:25.288 ***** 2025-11-01 13:01:01.086188 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086243 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.086254 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086264 | orchestrator | 2025-11-01 13:01:01.086274 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-11-01 13:01:01.086283 | orchestrator | Saturday 01 November 2025 12:59:59 +0000 (0:00:00.390) 0:01:25.679 ***** 2025-11-01 13:01:01.086293 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086301 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.086309 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086316 | orchestrator | 2025-11-01 
13:01:01.086324 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-11-01 13:01:01.086332 | orchestrator | Saturday 01 November 2025 13:00:00 +0000 (0:00:00.688) 0:01:26.367 ***** 2025-11-01 13:01:01.086340 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086348 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.086356 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.086363 | orchestrator | 2025-11-01 13:01:01.086371 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-01 13:01:01.086379 | orchestrator | Saturday 01 November 2025 13:00:01 +0000 (0:00:00.405) 0:01:26.773 ***** 2025-11-01 13:01:01.086388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086421 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086482 | orchestrator | 2025-11-01 13:01:01.086490 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-01 13:01:01.086498 | orchestrator | Saturday 01 November 2025 13:00:02 +0000 (0:00:01.469) 0:01:28.243 ***** 2025-11-01 13:01:01.086507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086596 | orchestrator | 2025-11-01 13:01:01.086604 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-01 13:01:01.086612 | orchestrator | Saturday 01 November 2025 13:00:07 +0000 (0:00:04.976) 0:01:33.219 ***** 2025-11-01 13:01:01.086620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086689 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.086709 | orchestrator | 2025-11-01 13:01:01.086717 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 13:01:01.086725 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:02.660) 0:01:35.880 ***** 2025-11-01 13:01:01.086733 | orchestrator | 2025-11-01 13:01:01.086741 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 13:01:01.086748 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.128) 0:01:36.008 ***** 2025-11-01 13:01:01.086756 | orchestrator | 2025-11-01 13:01:01.086764 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 13:01:01.086772 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.074) 0:01:36.083 ***** 2025-11-01 13:01:01.086780 | orchestrator | 2025-11-01 13:01:01.086787 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-01 13:01:01.086795 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.084) 0:01:36.167 ***** 
2025-11-01 13:01:01.086803 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.086811 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.086819 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.086827 | orchestrator | 2025-11-01 13:01:01.086835 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-01 13:01:01.086843 | orchestrator | Saturday 01 November 2025 13:00:13 +0000 (0:00:03.021) 0:01:39.188 ***** 2025-11-01 13:01:01.086851 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.086859 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.086867 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.086874 | orchestrator | 2025-11-01 13:01:01.086882 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-01 13:01:01.086890 | orchestrator | Saturday 01 November 2025 13:00:16 +0000 (0:00:02.539) 0:01:41.727 ***** 2025-11-01 13:01:01.086898 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.086906 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.086914 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.086922 | orchestrator | 2025-11-01 13:01:01.086930 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-01 13:01:01.086937 | orchestrator | Saturday 01 November 2025 13:00:18 +0000 (0:00:02.780) 0:01:44.509 ***** 2025-11-01 13:01:01.086945 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.086953 | orchestrator | 2025-11-01 13:01:01.086961 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-01 13:01:01.086974 | orchestrator | Saturday 01 November 2025 13:00:19 +0000 (0:00:00.483) 0:01:44.993 ***** 2025-11-01 13:01:01.086982 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.086990 | orchestrator | ok: [testbed-node-0] 2025-11-01 
13:01:01.086998 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.087006 | orchestrator | 2025-11-01 13:01:01.087014 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-01 13:01:01.087022 | orchestrator | Saturday 01 November 2025 13:00:20 +0000 (0:00:01.240) 0:01:46.234 ***** 2025-11-01 13:01:01.087030 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.087037 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.087045 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.087053 | orchestrator | 2025-11-01 13:01:01.087061 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-01 13:01:01.087068 | orchestrator | Saturday 01 November 2025 13:00:21 +0000 (0:00:00.669) 0:01:46.904 ***** 2025-11-01 13:01:01.087076 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087084 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.087092 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.087100 | orchestrator | 2025-11-01 13:01:01.087108 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-01 13:01:01.087116 | orchestrator | Saturday 01 November 2025 13:00:22 +0000 (0:00:00.800) 0:01:47.704 ***** 2025-11-01 13:01:01.087123 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.087131 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.087139 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.087147 | orchestrator | 2025-11-01 13:01:01.087155 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-01 13:01:01.087163 | orchestrator | Saturday 01 November 2025 13:00:22 +0000 (0:00:00.683) 0:01:48.388 ***** 2025-11-01 13:01:01.087171 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087179 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.087190 | orchestrator | ok: 
[testbed-node-2] 2025-11-01 13:01:01.087215 | orchestrator | 2025-11-01 13:01:01.087223 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-01 13:01:01.087231 | orchestrator | Saturday 01 November 2025 13:00:24 +0000 (0:00:01.454) 0:01:49.842 ***** 2025-11-01 13:01:01.087239 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087247 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.087255 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.087263 | orchestrator | 2025-11-01 13:01:01.087270 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-11-01 13:01:01.087278 | orchestrator | Saturday 01 November 2025 13:00:25 +0000 (0:00:01.013) 0:01:50.856 ***** 2025-11-01 13:01:01.087286 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087294 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.087302 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.087310 | orchestrator | 2025-11-01 13:01:01.087318 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-11-01 13:01:01.087325 | orchestrator | Saturday 01 November 2025 13:00:25 +0000 (0:00:00.422) 0:01:51.278 ***** 2025-11-01 13:01:01.087334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087355 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087369 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087378 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087394 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087403 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087416 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087424 | orchestrator | 2025-11-01 13:01:01.087432 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-11-01 13:01:01.087440 | orchestrator | Saturday 01 November 2025 13:00:27 +0000 (0:00:01.507) 0:01:52.786 ***** 2025-11-01 13:01:01.087448 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087469 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087532 | orchestrator | 2025-11-01 13:01:01.087540 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-11-01 13:01:01.087548 | orchestrator | Saturday 01 November 2025 13:00:31 +0000 (0:00:03.944) 0:01:56.731 ***** 2025-11-01 13:01:01.087560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087577 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087620 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087637 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:01:01.087645 | orchestrator | 2025-11-01 13:01:01.087653 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 13:01:01.087661 | orchestrator | Saturday 01 November 2025 13:00:34 +0000 (0:00:03.078) 0:01:59.809 ***** 2025-11-01 13:01:01.087669 | orchestrator | 2025-11-01 13:01:01.087677 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-11-01 13:01:01.087684 | orchestrator | Saturday 01 November 2025 13:00:34 +0000 (0:00:00.076) 0:01:59.886 ***** 2025-11-01 13:01:01.087692 | orchestrator | 2025-11-01 13:01:01.087700 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-11-01 13:01:01.087708 | orchestrator | Saturday 01 November 2025 13:00:34 +0000 (0:00:00.073) 0:01:59.960 ***** 2025-11-01 13:01:01.087716 | orchestrator | 2025-11-01 13:01:01.087724 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-11-01 13:01:01.087731 | orchestrator | Saturday 01 November 2025 13:00:34 +0000 (0:00:00.067) 0:02:00.027 ***** 2025-11-01 13:01:01.087739 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.087747 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.087755 | orchestrator | 2025-11-01 13:01:01.087766 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-11-01 13:01:01.087774 | orchestrator | Saturday 01 November 2025 13:00:40 +0000 (0:00:06.313) 0:02:06.340 ***** 2025-11-01 13:01:01.087787 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.087795 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.087803 | orchestrator | 2025-11-01 13:01:01.087811 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-11-01 13:01:01.087819 | orchestrator | Saturday 01 November 2025 13:00:46 +0000 (0:00:06.232) 0:02:12.573 ***** 2025-11-01 13:01:01.087827 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:01:01.087834 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:01:01.087842 | orchestrator | 2025-11-01 13:01:01.087850 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-11-01 13:01:01.087858 | orchestrator | Saturday 01 November 2025 13:00:53 +0000 (0:00:07.047) 0:02:19.620 ***** 2025-11-01 13:01:01.087866 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:01:01.087874 | orchestrator | 2025-11-01 13:01:01.087882 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-11-01 13:01:01.087889 | 
orchestrator | Saturday 01 November 2025 13:00:54 +0000 (0:00:00.168) 0:02:19.788 ***** 2025-11-01 13:01:01.087897 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087905 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.087913 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.087921 | orchestrator | 2025-11-01 13:01:01.087928 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-11-01 13:01:01.087936 | orchestrator | Saturday 01 November 2025 13:00:54 +0000 (0:00:00.795) 0:02:20.584 ***** 2025-11-01 13:01:01.087944 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.087952 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.087960 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.087968 | orchestrator | 2025-11-01 13:01:01.087976 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-11-01 13:01:01.087984 | orchestrator | Saturday 01 November 2025 13:00:55 +0000 (0:00:00.642) 0:02:21.226 ***** 2025-11-01 13:01:01.087991 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.087999 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.088007 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.088015 | orchestrator | 2025-11-01 13:01:01.088023 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-11-01 13:01:01.088031 | orchestrator | Saturday 01 November 2025 13:00:56 +0000 (0:00:00.797) 0:02:22.023 ***** 2025-11-01 13:01:01.088039 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:01:01.088046 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:01:01.088054 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:01:01.088062 | orchestrator | 2025-11-01 13:01:01.088070 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-11-01 13:01:01.088078 | orchestrator | Saturday 01 November 2025 
13:00:57 +0000 (0:00:00.836) 0:02:22.859 ***** 2025-11-01 13:01:01.088086 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.088093 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.088101 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.088109 | orchestrator | 2025-11-01 13:01:01.088117 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-11-01 13:01:01.088125 | orchestrator | Saturday 01 November 2025 13:00:57 +0000 (0:00:00.810) 0:02:23.670 ***** 2025-11-01 13:01:01.088133 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:01:01.088141 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:01:01.088148 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:01:01.088156 | orchestrator | 2025-11-01 13:01:01.088164 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:01:01.088172 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-01 13:01:01.088180 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 13:01:01.088188 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-11-01 13:01:01.088216 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:01:01.088224 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:01:01.088232 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:01:01.088240 | orchestrator | 2025-11-01 13:01:01.088248 | orchestrator | 2025-11-01 13:01:01.088256 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:01:01.088264 | orchestrator | Saturday 01 November 2025 13:00:58 +0000 
(0:00:00.953) 0:02:24.623 ***** 2025-11-01 13:01:01.088272 | orchestrator | =============================================================================== 2025-11-01 13:01:01.088280 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.19s 2025-11-01 13:01:01.088288 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.30s 2025-11-01 13:01:01.088296 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.83s 2025-11-01 13:01:01.088303 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.34s 2025-11-01 13:01:01.088311 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.77s 2025-11-01 13:01:01.088344 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.98s 2025-11-01 13:01:01.088352 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.94s 2025-11-01 13:01:01.088364 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s 2025-11-01 13:01:01.088372 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.95s 2025-11-01 13:01:01.088380 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.90s 2025-11-01 13:01:01.088388 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.66s 2025-11-01 13:01:01.088396 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.25s 2025-11-01 13:01:01.088404 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.94s 2025-11-01 13:01:01.088412 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.92s 2025-11-01 13:01:01.088419 | orchestrator | ovn-controller : Ensuring config directories exist 
---------------------- 1.88s 2025-11-01 13:01:01.088427 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.85s 2025-11-01 13:01:01.088435 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.55s 2025-11-01 13:01:01.088443 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.51s 2025-11-01 13:01:01.088450 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s 2025-11-01 13:01:01.088458 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.45s 2025-11-01 13:01:01.088470 | orchestrator | 2025-11-01 13:01:01 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:01:01.088478 | orchestrator | 2025-11-01 13:01:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:01:04.134389 | orchestrator | 2025-11-01 13:01:04 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:01:04.135380 | orchestrator | 2025-11-01 13:01:04 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:01:04.135415 | orchestrator | 2025-11-01 13:01:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:33.525830 | orchestrator | 2025-11-01 13:03:33 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:33.526346 | orchestrator | 2025-11-01 13:03:33 | INFO  
| Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:33.526695 | orchestrator | 2025-11-01 13:03:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:36.601187 | orchestrator | 2025-11-01 13:03:36 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:36.602962 | orchestrator | 2025-11-01 13:03:36 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:36.603282 | orchestrator | 2025-11-01 13:03:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:39.643098 | orchestrator | 2025-11-01 13:03:39 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:39.645329 | orchestrator | 2025-11-01 13:03:39 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:39.645711 | orchestrator | 2025-11-01 13:03:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:42.687472 | orchestrator | 2025-11-01 13:03:42 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:42.688251 | orchestrator | 2025-11-01 13:03:42 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:42.688492 | orchestrator | 2025-11-01 13:03:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:45.737031 | orchestrator | 2025-11-01 13:03:45 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:45.741243 | orchestrator | 2025-11-01 13:03:45 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:45.741276 | orchestrator | 2025-11-01 13:03:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:48.792067 | orchestrator | 2025-11-01 13:03:48 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:48.792154 | orchestrator | 2025-11-01 13:03:48 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 
13:03:48.792622 | orchestrator | 2025-11-01 13:03:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:51.846960 | orchestrator | 2025-11-01 13:03:51 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:51.848152 | orchestrator | 2025-11-01 13:03:51 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:51.848336 | orchestrator | 2025-11-01 13:03:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:54.898670 | orchestrator | 2025-11-01 13:03:54 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:54.899583 | orchestrator | 2025-11-01 13:03:54 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:54.899957 | orchestrator | 2025-11-01 13:03:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:03:57.943965 | orchestrator | 2025-11-01 13:03:57 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:03:57.946250 | orchestrator | 2025-11-01 13:03:57 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:03:57.946278 | orchestrator | 2025-11-01 13:03:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:00.989044 | orchestrator | 2025-11-01 13:04:00 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:00.990752 | orchestrator | 2025-11-01 13:04:00 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:00.991050 | orchestrator | 2025-11-01 13:04:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:04.035068 | orchestrator | 2025-11-01 13:04:04 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:04.036484 | orchestrator | 2025-11-01 13:04:04 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:04.036556 | orchestrator | 2025-11-01 13:04:04 | INFO  | Wait 1 second(s) 
until the next check 2025-11-01 13:04:07.074149 | orchestrator | 2025-11-01 13:04:07 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:07.077455 | orchestrator | 2025-11-01 13:04:07 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:07.077487 | orchestrator | 2025-11-01 13:04:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:10.137766 | orchestrator | 2025-11-01 13:04:10 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:10.140247 | orchestrator | 2025-11-01 13:04:10 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:10.140459 | orchestrator | 2025-11-01 13:04:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:13.187447 | orchestrator | 2025-11-01 13:04:13 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:13.189796 | orchestrator | 2025-11-01 13:04:13 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:13.189824 | orchestrator | 2025-11-01 13:04:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:16.240151 | orchestrator | 2025-11-01 13:04:16 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:16.241422 | orchestrator | 2025-11-01 13:04:16 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:16.243753 | orchestrator | 2025-11-01 13:04:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:19.296709 | orchestrator | 2025-11-01 13:04:19 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:19.298110 | orchestrator | 2025-11-01 13:04:19 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state STARTED 2025-11-01 13:04:19.298142 | orchestrator | 2025-11-01 13:04:19 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:22.346363 | orchestrator | 2025-11-01 
2025-11-01 13:04:34.536507 | orchestrator | 2025-11-01 13:04:34 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED
2025-11-01 13:04:34.548609 | orchestrator | 2025-11-01 13:04:34 | INFO  | Task 602027a6-dc6e-487d-b42d-dde6965fa1d8 is in state SUCCESS
2025-11-01 13:04:34.550989 | orchestrator |
2025-11-01 13:04:34.551021 | orchestrator |
2025-11-01 13:04:34.551033 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:04:34.551045 |
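The wait loop recorded above (check each task's state, sleep, check again until every task reaches a terminal state) can be sketched as a small polling helper. This is an illustrative sketch only, not the actual OSISM/Celery client code; `get_state` is an assumed callback for querying a task's state.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until none is still running.

    get_state: callable mapping a task id to its current state string
    (an assumed interface; the real client queries a task backend).
    Returns the final state of every task.
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Keep polling only tasks that have not reached a terminal state.
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states


# Demo with a fake state source that reports SUCCESS on the third poll.
polls = {"count": 0}

def fake_state(task_id):
    polls["count"] += 1
    return "STARTED" if polls["count"] < 3 else "SUCCESS"

result = wait_for_tasks(fake_state, ["demo-task"], interval=0)
```

A fixed one-second interval matches the log; a production helper would usually also add an overall timeout.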
orchestrator |
2025-11-01 13:04:34.551056 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:04:34.551068 | orchestrator | Saturday 01 November 2025 12:57:08 +0000 (0:00:00.692) 0:00:00.692 *****
2025-11-01 13:04:34.551079 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.551091 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.551101 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.551112 | orchestrator |
2025-11-01 13:04:34.551123 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:04:34.551134 | orchestrator | Saturday 01 November 2025 12:57:09 +0000 (0:00:00.673) 0:00:01.366 *****
2025-11-01 13:04:34.551146 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-11-01 13:04:34.551156 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-11-01 13:04:34.551167 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-11-01 13:04:34.551178 | orchestrator |
2025-11-01 13:04:34.551188 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-11-01 13:04:34.551268 | orchestrator |
2025-11-01 13:04:34.551282 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-11-01 13:04:34.551295 | orchestrator | Saturday 01 November 2025 12:57:10 +0000 (0:00:01.081) 0:00:02.448 *****
2025-11-01 13:04:34.551319 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.551331 | orchestrator |
2025-11-01 13:04:34.551342 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-11-01 13:04:34.551353 | orchestrator | Saturday 01 November 2025 12:57:12 +0000 (0:00:01.558) 0:00:04.006 *****
2025-11-01 13:04:34.551363 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.551374 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.551385 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.551396 | orchestrator |
2025-11-01 13:04:34.551407 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-11-01 13:04:34.551418 | orchestrator | Saturday 01 November 2025 12:57:14 +0000 (0:00:01.971) 0:00:05.977 *****
2025-11-01 13:04:34.551429 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.551440 | orchestrator |
2025-11-01 13:04:34.551451 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-11-01 13:04:34.551461 | orchestrator | Saturday 01 November 2025 12:57:15 +0000 (0:00:01.211) 0:00:07.670 *****
2025-11-01 13:04:34.551473 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.551483 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.551494 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.551505 | orchestrator |
2025-11-01 13:04:34.551516 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-11-01 13:04:34.551527 | orchestrator | Saturday 01 November 2025 12:57:16 +0000 (0:00:01.211) 0:00:08.881 *****
2025-11-01 13:04:34.551538 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551570 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551581 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-11-01 13:04:34.551593 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-11-01 13:04:34.551619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-11-01 13:04:34.551632 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551644 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-11-01 13:04:34.551657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-11-01 13:04:34.551670 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-11-01 13:04:34.551683 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-11-01 13:04:34.551696 | orchestrator |
2025-11-01 13:04:34.551709 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-11-01 13:04:34.551721 | orchestrator | Saturday 01 November 2025 12:57:22 +0000 (0:00:05.328) 0:00:14.209 *****
2025-11-01 13:04:34.551734 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-11-01 13:04:34.551747 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-11-01 13:04:34.551895 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-11-01 13:04:34.551908 | orchestrator |
2025-11-01 13:04:34.551921 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-11-01 13:04:34.551934 | orchestrator | Saturday 01 November 2025 12:57:23 +0000 (0:00:00.883) 0:00:15.093 *****
2025-11-01 13:04:34.551947 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-11-01 13:04:34.551958 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-11-01 13:04:34.551969 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-11-01 13:04:34.551980 | orchestrator |
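The sysctl items in the task output above are plain name/value pairs, with `KOLLA_UNSET` marking values to leave at the kernel default. A minimal sketch of that rendering logic, assuming a plain sysctl.conf-style output (this is not the actual Kolla template, and the item list is copied from the log):

```python
# Items as reported by the "sysctl : Setting sysctl values" task above.
# KOLLA_UNSET marks a value that must not be written (kernel default kept).
SYSCTL_ITEMS = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]


def render_sysctl_conf(items):
    """Render name/value pairs as sysctl.conf lines, skipping unset markers."""
    lines = [
        f"{item['name']}={item['value']}"
        for item in items
        if item["value"] != "KOLLA_UNSET"
    ]
    return "\n".join(lines) + "\n"


conf = render_sysctl_conf(SYSCTL_ITEMS)
print(conf)
```

The non-local-bind settings are what allow HAProxy and keepalived to bind the virtual IP on nodes that do not currently hold it; `tcp_retries2` is skipped because the deployment leaves it unset.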
2025-11-01 13:04:34.551990 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 13:04:34.552001 | orchestrator | Saturday 01 November 2025 12:57:25 +0000 (0:00:02.559) 0:00:17.653 ***** 2025-11-01 13:04:34.552012 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-11-01 13:04:34.552023 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.552046 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-11-01 13:04:34.552058 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.552069 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-11-01 13:04:34.552080 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.552091 | orchestrator | 2025-11-01 13:04:34.552101 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-11-01 13:04:34.552112 | orchestrator | Saturday 01 November 2025 12:57:27 +0000 (0:00:01.345) 0:00:18.998 ***** 2025-11-01 13:04:34.552126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-11-01 13:04:34.552244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 13:04:34.552262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 13:04:34.552273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-11-01 13:04:34.552296 | orchestrator | 2025-11-01 13:04:34.552307 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-11-01 13:04:34.552318 | orchestrator | Saturday 01 November 2025 12:57:29 +0000 (0:00:02.557) 0:00:21.555 ***** 2025-11-01 13:04:34.552329 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.552340 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.552351 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.552361 | orchestrator | 2025-11-01 13:04:34.552372 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-11-01 13:04:34.552383 | orchestrator | Saturday 01 November 2025 12:57:32 +0000 (0:00:03.013) 0:00:24.569 ***** 2025-11-01 13:04:34.552394 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-11-01 13:04:34.552405 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-11-01 13:04:34.552415 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-11-01 13:04:34.552426 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-11-01 13:04:34.552437 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-11-01 13:04:34.552488 | orchestrator | changed: [testbed-node-2] => 
(item=rules) 2025-11-01 13:04:34.552500 | orchestrator | 2025-11-01 13:04:34.552531 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-11-01 13:04:34.552542 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:04.098) 0:00:28.668 ***** 2025-11-01 13:04:34.552553 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.552564 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.552575 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.552586 | orchestrator | 2025-11-01 13:04:34.552596 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-11-01 13:04:34.552607 | orchestrator | Saturday 01 November 2025 12:57:38 +0000 (0:00:02.242) 0:00:30.910 ***** 2025-11-01 13:04:34.552618 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.552676 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.552688 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.552699 | orchestrator | 2025-11-01 13:04:34.552710 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-11-01 13:04:34.552721 | orchestrator | Saturday 01 November 2025 12:57:42 +0000 (0:00:03.757) 0:00:34.668 ***** 2025-11-01 13:04:34.552732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.552764 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.552777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.552801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-11-01 13:04:34.552814 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.552826 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.552837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.552849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.552861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 
'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-11-01 13:04:34.552872 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.552891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.552915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.552927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.552939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-11-01 13:04:34.552950 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.552961 | orchestrator |
2025-11-01 13:04:34.552972 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-11-01 13:04:34.552983 | orchestrator | Saturday 01 November 2025 12:57:43 +0000 (0:00:00.840) 0:00:35.509 *****
2025-11-01 13:04:34.552995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-11-01 13:04:34.553107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-11-01 13:04:34.553130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72', '__omit_place_holder__a193fde22e57cc236d45a538122a8cde94bf1c72'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-11-01 13:04:34.553176 | orchestrator |
2025-11-01 13:04:34.553187 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-11-01 13:04:34.553198 | orchestrator | Saturday 01 November 2025 12:57:48 +0000 (0:00:05.383) 0:00:40.892 *****
2025-11-01 13:04:34.553230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.553290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.553319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.553353 | orchestrator |
2025-11-01 13:04:34.553364 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-11-01 13:04:34.553375 | orchestrator | Saturday 01 November 2025 12:57:54 +0000 (0:00:03.188) 0:00:46.227 *****
2025-11-01 13:04:34.553386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-11-01 13:04:34.553404 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-11-01 13:04:34.553415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-11-01 13:04:34.553425 | orchestrator |
2025-11-01 13:04:34.553436 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-11-01 13:04:34.553447 | orchestrator | Saturday 01 November 2025 12:57:57 +0000 (0:00:03.188) 0:00:49.416 *****
2025-11-01 13:04:34.553458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-11-01 13:04:34.553468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-11-01 13:04:34.553479 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-11-01 13:04:34.553490 | orchestrator |
2025-11-01 13:04:34.554309 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-11-01 13:04:34.554340 | orchestrator | Saturday 01 November 2025 12:58:05 +0000 (0:00:08.045) 0:00:57.462 *****
2025-11-01 13:04:34.554352 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.554363 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.554374 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.554385 | orchestrator |
2025-11-01 13:04:34.554396 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-11-01 13:04:34.554407 | orchestrator | Saturday 01 November 2025 12:58:07 +0000 (0:00:01.746) 0:00:59.209 *****
2025-11-01 13:04:34.554418 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-11-01 13:04:34.554430 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-11-01 13:04:34.554441 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-11-01 13:04:34.554452 | orchestrator |
2025-11-01 13:04:34.554463 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-11-01 13:04:34.554474 | orchestrator | Saturday 01 November 2025 12:58:11 +0000 (0:00:03.857) 0:01:03.066 *****
2025-11-01 13:04:34.554493 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-11-01 13:04:34.554505 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-11-01 13:04:34.554515 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-11-01 13:04:34.554526 | orchestrator |
2025-11-01 13:04:34.554537 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-11-01 13:04:34.554856 | orchestrator | Saturday 01 November 2025 12:58:14 +0000 (0:00:03.014) 0:01:06.080 *****
2025-11-01 13:04:34.554867 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-11-01 13:04:34.554879 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-11-01 13:04:34.554889 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-11-01 13:04:34.554900 | orchestrator |
2025-11-01 13:04:34.554911 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-11-01 13:04:34.554922 | orchestrator | Saturday 01 November 2025 12:58:16 +0000 (0:00:01.886) 0:01:07.967 *****
2025-11-01 13:04:34.554932 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-11-01 13:04:34.554943 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-11-01 13:04:34.554954 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-11-01 13:04:34.554965 | orchestrator |
2025-11-01 13:04:34.554975 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-11-01 13:04:34.554986 | orchestrator | Saturday 01 November 2025 12:58:18 +0000 (0:00:02.051) 0:01:10.018 *****
2025-11-01 13:04:34.555009 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.555020 | orchestrator |
2025-11-01 13:04:34.556782 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-11-01 13:04:34.556797 | orchestrator | Saturday 01 November 2025 12:58:19 +0000 (0:00:01.397) 0:01:11.416 *****
2025-11-01 13:04:34.556808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.556820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.556841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.556852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.556868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.556879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.556900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.556910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.556920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.556930 | orchestrator |
2025-11-01 13:04:34.556940 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-11-01 13:04:34.556950 | orchestrator | Saturday 01 November 2025 12:58:24 +0000 (0:00:04.710) 0:01:16.127 *****
2025-11-01 13:04:34.557072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.557106 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.557123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.557153 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.557164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.557323 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.557335 | orchestrator |
2025-11-01 13:04:34.557346 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-11-01 13:04:34.557357 | orchestrator | Saturday 01 November 2025 12:58:24 +0000 (0:00:00.709) 0:01:16.836 *****
2025-11-01 13:04:34.557376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.557411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557423 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.557442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.557455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.557488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557512 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.557523 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.557534 | orchestrator | 2025-11-01 13:04:34.557545 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-01 13:04:34.557556 | orchestrator | Saturday 01 November 2025 12:58:26 +0000 (0:00:01.159) 0:01:17.996 ***** 2025-11-01 13:04:34.557568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557586 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557609 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.557625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557662 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.557672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557709 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.557719 | orchestrator | 2025-11-01 13:04:34.557728 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-01 13:04:34.557738 | orchestrator | Saturday 01 November 2025 12:58:27 +0000 (0:00:00.988) 0:01:18.984 ***** 2025-11-01 13:04:34.557748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557793 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.557803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557833 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.557848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
 2025-11-01 13:04:34.557868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557889 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.557899 | orchestrator | 2025-11-01 13:04:34.557909 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-01 13:04:34.557918 | orchestrator | Saturday 01 November 2025 12:58:27 +0000 (0:00:00.775) 0:01:19.760 ***** 2025-11-01 13:04:34.557929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.557949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.557959 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.557975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.557991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.558005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.558226 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.558245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.558265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.558276 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.558285 | orchestrator | 2025-11-01 13:04:34.558295 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-11-01 13:04:34.558305 | orchestrator | Saturday 01 November 2025 12:58:28 +0000 (0:00:00.932) 0:01:20.693 ***** 2025-11-01 13:04:34.558315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.558355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.558376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.558386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.558396 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.558406 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.558416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-11-01 13:04:34.558447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-11-01 13:04:34.558456 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.558466 | orchestrator | 2025-11-01 13:04:34.558476 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-11-01 13:04:34.558486 | orchestrator | Saturday 01 November 2025 12:58:30 +0000 (0:00:01.458) 0:01:22.151 ***** 2025-11-01 
13:04:34.558500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-11-01 13:04:34.558521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-11-01 13:04:34.558531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.558557 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.558575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558586 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.558612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.558627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.558638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558648 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.558658 | orchestrator |
2025-11-01 13:04:34.558668 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-11-01 13:04:34.558677 | orchestrator | Saturday 01 November 2025 12:58:31 +0000 (0:00:00.968) 0:01:23.119 *****
2025-11-01 13:04:34.558688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.558703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.558714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558724 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.558740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.558780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.558792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558801 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.558812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.558822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.558839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.558850 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.558859 | orchestrator |
2025-11-01 13:04:34.558869 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-11-01 13:04:34.558879 | orchestrator | Saturday 01 November 2025 12:58:32 +0000 (0:00:01.254) 0:01:24.373 *****
2025-11-01 13:04:34.558888 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-11-01 13:04:34.558899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-11-01 13:04:34.558914 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-11-01 13:04:34.558924 | orchestrator |
2025-11-01 13:04:34.558934 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-11-01 13:04:34.558943 | orchestrator | Saturday 01 November 2025 12:58:35 +0000 (0:00:02.638) 0:01:27.012 *****
2025-11-01 13:04:34.559007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-11-01 13:04:34.559017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-11-01 13:04:34.559027 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-11-01 13:04:34.559037 | orchestrator |
2025-11-01 13:04:34.559046 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-11-01 13:04:34.559056 | orchestrator | Saturday 01 November 2025 12:58:37 +0000 (0:00:02.186) 0:01:29.199 *****
2025-11-01 13:04:34.559066 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:04:34.559075 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:04:34.559093 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:04:34.559103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:04:34.559113 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.559122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:04:34.559132 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.559142 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:04:34.559151 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.559161 | orchestrator |
2025-11-01 13:04:34.559171 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-11-01 13:04:34.559180 | orchestrator | Saturday 01 November 2025 12:58:38 +0000 (0:00:01.203) 0:01:30.402 *****
2025-11-01 13:04:34.559197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.559226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.559237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-11-01 13:04:34.559253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.559264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.559279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-11-01 13:04:34.559289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.559306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.559316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-11-01 13:04:34.559326 | orchestrator |
2025-11-01 13:04:34.559336 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-11-01 13:04:34.559346 | orchestrator | Saturday 01 November 2025 12:58:42 +0000 (0:00:03.785) 0:01:34.187 *****
2025-11-01 13:04:34.559356 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.559365 | orchestrator |
2025-11-01 13:04:34.559375 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-11-01 13:04:34.559385 | orchestrator | Saturday 01 November 2025 12:58:43 +0000 (0:00:00.809) 0:01:34.997 *****
2025-11-01 13:04:34.559396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559555 | orchestrator |
2025-11-01 13:04:34.559565 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-11-01 13:04:34.559574 | orchestrator | Saturday 01 November 2025 12:58:48 +0000 (0:00:05.478) 0:01:40.476 *****
2025-11-01 13:04:34.559585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559641 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.559652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559692 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.559717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-11-01 13:04:34.559736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-11-01 13:04:34.559747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.559767 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.559777 | orchestrator |
2025-11-01 13:04:34.559787 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-11-01 13:04:34.559797 | orchestrator | Saturday 01 November 2025 12:58:50 +0000 (0:00:01.528) 0:01:42.004 *****
2025-11-01 13:04:34.559807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559828 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.559838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559858 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.559868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-11-01 13:04:34.559888 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.559903 | orchestrator |
2025-11-01 13:04:34.559918 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-11-01 13:04:34.559928 | orchestrator | Saturday 01 November 2025 12:58:51 +0000 (0:00:01.110) 0:01:43.115 *****
2025-11-01 13:04:34.559938 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.559948 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.559957 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.559967 | orchestrator |
2025-11-01 13:04:34.559977 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-11-01 13:04:34.559986 | orchestrator | Saturday 01 November 2025 12:58:52 +0000 (0:00:01.557) 0:01:44.672 *****
2025-11-01 13:04:34.559996 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.560006 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.560015 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.560025 | orchestrator |
2025-11-01 13:04:34.560035 | orchestrator | TASK [include_role : barbican] *************************************************
2025-11-01 13:04:34.560072 | orchestrator | Saturday 01 November 2025 12:58:56 +0000 (0:00:04.193) 0:01:48.865 *****
2025-11-01 13:04:34.560082 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.560091 | orchestrator |
2025-11-01 13:04:34.560102 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-11-01 13:04:34.560118 | orchestrator | Saturday 01 November 2025 12:58:58 +0000 (0:00:01.100) 0:01:49.966 *****
2025-11-01 13:04:34.560141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.560159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.560177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.560285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.560425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-11-01 13:04:34.560435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560451 | orchestrator | 2025-11-01 13:04:34.560461 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-11-01 13:04:34.560470 | orchestrator | Saturday 01 November 2025 12:59:04 +0000 (0:00:06.090) 0:01:56.056 ***** 2025-11-01 13:04:34.560486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.560497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560522 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.560532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.560542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560568 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.560583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.560594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.560632 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.560642 | orchestrator | 
2025-11-01 13:04:34.560651 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-11-01 13:04:34.560661 | orchestrator | Saturday 01 November 2025 12:59:05 +0000 (0:00:00.963) 0:01:57.020 ***** 2025-11-01 13:04:34.560671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560692 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.560702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560727 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.560737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-11-01 13:04:34.560756 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:04:34.560766 | orchestrator | 2025-11-01 13:04:34.560776 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-11-01 13:04:34.560785 | orchestrator | Saturday 01 November 2025 12:59:06 +0000 (0:00:01.418) 0:01:58.438 ***** 2025-11-01 13:04:34.560792 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.560800 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.560808 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.560816 | orchestrator | 2025-11-01 13:04:34.560824 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-11-01 13:04:34.560832 | orchestrator | Saturday 01 November 2025 12:59:08 +0000 (0:00:01.591) 0:02:00.030 ***** 2025-11-01 13:04:34.560840 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.560847 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.560855 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.560863 | orchestrator | 2025-11-01 13:04:34.560875 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-11-01 13:04:34.560883 | orchestrator | Saturday 01 November 2025 12:59:10 +0000 (0:00:02.325) 0:02:02.356 ***** 2025-11-01 13:04:34.560891 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.560899 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.560907 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.560915 | orchestrator | 2025-11-01 13:04:34.560923 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-11-01 13:04:34.560931 | orchestrator | Saturday 01 November 2025 12:59:10 +0000 (0:00:00.367) 0:02:02.723 ***** 2025-11-01 13:04:34.560939 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.560947 | orchestrator | 2025-11-01 13:04:34.560955 | orchestrator | TASK [haproxy-config : Copying 
over ceph-rgw haproxy config] ******************* 2025-11-01 13:04:34.560962 | orchestrator | Saturday 01 November 2025 12:59:11 +0000 (0:00:01.009) 0:02:03.733 ***** 2025-11-01 13:04:34.560974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-01 13:04:34.560984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-01 13:04:34.560998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 
'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-11-01 13:04:34.561006 | orchestrator | 2025-11-01 13:04:34.561014 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-11-01 13:04:34.561021 | orchestrator | Saturday 01 November 2025 12:59:14 +0000 (0:00:02.727) 0:02:06.461 ***** 2025-11-01 13:04:34.561034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 13:04:34.561042 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:04:34.561051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 13:04:34.561059 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.561070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-11-01 13:04:34.561084 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.561092 | orchestrator | 2025-11-01 13:04:34.561100 | orchestrator | TASK 
[haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-11-01 13:04:34.561108 | orchestrator | Saturday 01 November 2025 12:59:16 +0000 (0:00:01.996) 0:02:08.457 ***** 2025-11-01 13:04:34.561117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561135 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.561144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561160 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.561173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-11-01 13:04:34.561189 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.561197 | orchestrator | 2025-11-01 13:04:34.561219 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-11-01 13:04:34.561227 | orchestrator | Saturday 01 November 2025 12:59:19 +0000 (0:00:02.547) 0:02:11.005 ***** 2025-11-01 13:04:34.561235 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.561243 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.561251 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.561263 | orchestrator | 2025-11-01 13:04:34.561271 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-11-01 13:04:34.561282 | orchestrator | Saturday 01 November 2025 12:59:20 +0000 (0:00:00.925) 0:02:11.931 ***** 2025-11-01 13:04:34.561291 | orchestrator | skipping: [testbed-node-1] 2025-11-01 
13:04:34.561299 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.561306 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.561314 | orchestrator | 2025-11-01 13:04:34.561322 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-11-01 13:04:34.561330 | orchestrator | Saturday 01 November 2025 12:59:21 +0000 (0:00:01.465) 0:02:13.397 ***** 2025-11-01 13:04:34.561338 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.561346 | orchestrator | 2025-11-01 13:04:34.561353 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-11-01 13:04:34.561361 | orchestrator | Saturday 01 November 2025 12:59:22 +0000 (0:00:00.811) 0:02:14.208 ***** 2025-11-01 13:04:34.561369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.561378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.561387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.561400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 
13:04:34.561416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.561425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.561467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561498 | orchestrator |
2025-11-01 13:04:34.561506 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-11-01 13:04:34.561514 | orchestrator | Saturday 01 November 2025 12:59:26 +0000 (0:00:03.983) 0:02:18.192 *****
2025-11-01 13:04:34.561522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.561531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776',
'tls_backend': 'no'}}}})
2025-11-01 13:04:34.561568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561576 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.561585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared',
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561618 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.561630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.561639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.561664 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.561672 | orchestrator |
2025-11-01 13:04:34.561679 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-11-01 13:04:34.561692 | orchestrator | Saturday 01 November 2025 12:59:27 +0000 (0:00:01.063) 0:02:19.255 *****
2025-11-01 13:04:34.561700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561721 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.561730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561746 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.561754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-11-01 13:04:34.561773 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.561781 | orchestrator |
2025-11-01 13:04:34.561789 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-11-01 13:04:34.561797 | orchestrator | Saturday 01 November 2025 12:59:28 +0000 (0:00:00.997) 0:02:20.252 *****
2025-11-01 13:04:34.561805 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.561812 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.561820 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.561828 | orchestrator |
2025-11-01 13:04:34.561836 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-11-01 13:04:34.561844 | orchestrator | Saturday 01 November 2025 12:59:29 +0000 (0:00:01.343) 0:02:21.596 *****
2025-11-01 13:04:34.561851 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.561859 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.561867 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.561875 | orchestrator |
2025-11-01 13:04:34.561883 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-11-01 13:04:34.561891 | orchestrator | Saturday 01 November 2025 12:59:31 +0000 (0:00:02.187) 0:02:23.783 *****
2025-11-01 13:04:34.561898 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.561906 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.561914 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.561922 | orchestrator |
2025-11-01
13:04:34.561929 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-11-01 13:04:34.561937 | orchestrator | Saturday 01 November 2025 12:59:32 +0000 (0:00:00.578) 0:02:24.361 *****
2025-11-01 13:04:34.561945 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.561953 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.561961 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.561968 | orchestrator |
2025-11-01 13:04:34.561976 | orchestrator | TASK [include_role : designate] ************************************************
2025-11-01 13:04:34.561984 | orchestrator | Saturday 01 November 2025 12:59:32 +0000 (0:00:00.388) 0:02:24.750 *****
2025-11-01 13:04:34.561992 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.562000 | orchestrator |
2025-11-01 13:04:34.562008 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-11-01 13:04:34.562055 | orchestrator | Saturday 01 November 2025 12:59:33 +0000 (0:00:00.883) 0:02:25.634 *****
2025-11-01 13:04:34.562065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001',
'listen_port': '9001'}}}})
2025-11-01 13:04:34.562079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-11-01 13:04:34.562088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562109 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group':
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-11-01 13:04:34.562152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-11-01 13:04:34.562165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes':
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-11-01 13:04:34.562174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-11-01 13:04:34.562195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes':
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.562297 | orchestrator |
2025-11-01 13:04:34.562305 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-11-01 13:04:34.562313 | orchestrator | Saturday 01 November 2025 12:59:38 +0000 (0:00:04.433) 0:02:30.067 *****
2025-11-01 13:04:34.562326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:04:34.562339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:04:34.562348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:04:34.562394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:04:34.562414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562427 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.562436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562464 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:04:34.562485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:04:34.562506 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.562515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.562564 | orchestrator | skipping: 
[testbed-node-2]
2025-11-01 13:04:34.562580 | orchestrator |
2025-11-01 13:04:34.562588 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-11-01 13:04:34.562596 | orchestrator | Saturday 01 November 2025 12:59:39 +0000 (0:00:01.016) 0:02:31.084 *****
2025-11-01 13:04:34.562604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562620 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.562628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562644 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.562652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-11-01 13:04:34.562668 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.562697 | orchestrator |
2025-11-01 13:04:34.562706 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-11-01 13:04:34.562714 | orchestrator | Saturday 01 November 2025 12:59:40 +0000 (0:00:01.113) 0:02:32.198 *****
2025-11-01 13:04:34.562722 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.562730 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.562738 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.562745 | orchestrator |
2025-11-01 13:04:34.562753 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-11-01 13:04:34.562761 | orchestrator | Saturday 01 November 2025 12:59:42 +0000 (0:00:01.919) 0:02:34.117 *****
2025-11-01 13:04:34.562769 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.562777 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.562785 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.562792 | orchestrator |
2025-11-01 13:04:34.562800 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-11-01 13:04:34.562808 | orchestrator | Saturday 01 November 2025 12:59:44 +0000 (0:00:01.901) 0:02:36.019 *****
2025-11-01 13:04:34.562816 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.562824 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.562832 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.562840 | orchestrator |
2025-11-01 13:04:34.562847 | orchestrator | TASK [include_role : glance] ***************************************************
2025-11-01 13:04:34.562855 | orchestrator | Saturday 01 November 2025 12:59:44 +0000 (0:00:00.590) 0:02:36.610 *****
2025-11-01 13:04:34.562863 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.562871 | orchestrator |
2025-11-01 13:04:34.562879 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-11-01
13:04:34.562887 | orchestrator | Saturday 01 November 2025 12:59:45 +0000 (0:00:00.895) 0:02:37.506 ***** 2025-11-01 13:04:34.562907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:04:34.562924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.562939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:04:34.562958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.562972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:04:34.562990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-11-01 13:04:34.563000 | orchestrator |
2025-11-01 13:04:34.563008 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-11-01 13:04:34.563016 | orchestrator | Saturday 01 November 2025 12:59:50 +0000 (0:00:05.175) 0:02:42.681 *****
2025-11-01 13:04:34.563029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '',
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:04:34.563047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.563056 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:04:34.563091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.563100 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:04:34.563124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.563138 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563146 | orchestrator | 2025-11-01 13:04:34.563154 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-11-01 13:04:34.563327 | orchestrator | Saturday 01 November 2025 12:59:54 +0000 (0:00:03.894) 0:02:46.576 ***** 2025-11-01 13:04:34.563340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563357 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563388 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-11-01 13:04:34.563419 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563427 | orchestrator | 2025-11-01 13:04:34.563435 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-11-01 13:04:34.563457 | orchestrator | Saturday 01 November 2025 12:59:58 +0000 (0:00:04.175) 0:02:50.752 ***** 2025-11-01 13:04:34.563466 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.563474 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.563482 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.563489 | orchestrator | 2025-11-01 13:04:34.563497 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-11-01 13:04:34.563505 | orchestrator | Saturday 01 November 2025 13:00:00 +0000 (0:00:01.330) 0:02:52.083 ***** 2025-11-01 13:04:34.563513 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.563521 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.563529 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.563536 | orchestrator | 2025-11-01 13:04:34.563544 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-11-01 13:04:34.563555 | orchestrator | Saturday 01 November 2025 13:00:02 +0000 (0:00:02.389) 0:02:54.472 ***** 2025-11-01 13:04:34.563564 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563571 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563579 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563587 | orchestrator | 
2025-11-01 13:04:34.563595 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-11-01 13:04:34.563601 | orchestrator | Saturday 01 November 2025 13:00:03 +0000 (0:00:00.690) 0:02:55.163 ***** 2025-11-01 13:04:34.563608 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.563615 | orchestrator | 2025-11-01 13:04:34.563621 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-11-01 13:04:34.563628 | orchestrator | Saturday 01 November 2025 13:00:04 +0000 (0:00:01.084) 0:02:56.248 ***** 2025-11-01 13:04:34.563635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 13:04:34.563642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 13:04:34.563654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-11-01 13:04:34.563661 | orchestrator | 2025-11-01 13:04:34.563668 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-11-01 13:04:34.563675 | orchestrator | Saturday 01 November 2025 13:00:09 +0000 (0:00:04.937) 0:03:01.185 ***** 2025-11-01 13:04:34.563687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 13:04:34.563697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 13:04:34.563704 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563711 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-11-01 13:04:34.563725 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563732 | orchestrator | 2025-11-01 13:04:34.563739 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-11-01 13:04:34.563745 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.828) 0:03:02.014 ***** 2025-11-01 13:04:34.563752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563771 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563791 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-11-01 13:04:34.563811 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563818 | orchestrator | 2025-11-01 13:04:34.563825 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-11-01 13:04:34.563831 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.897) 0:03:02.911 ***** 2025-11-01 13:04:34.563838 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.563844 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.563851 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.563857 | orchestrator | 2025-11-01 13:04:34.563864 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-11-01 13:04:34.563871 | 
orchestrator | Saturday 01 November 2025 13:00:12 +0000 (0:00:01.517) 0:03:04.429 ***** 2025-11-01 13:04:34.563877 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.563884 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.563891 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.563897 | orchestrator | 2025-11-01 13:04:34.563904 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-11-01 13:04:34.563910 | orchestrator | Saturday 01 November 2025 13:00:14 +0000 (0:00:02.325) 0:03:06.754 ***** 2025-11-01 13:04:34.563917 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.563924 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.563934 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.563941 | orchestrator | 2025-11-01 13:04:34.564001 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-11-01 13:04:34.564010 | orchestrator | Saturday 01 November 2025 13:00:15 +0000 (0:00:00.748) 0:03:07.503 ***** 2025-11-01 13:04:34.564016 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.564023 | orchestrator | 2025-11-01 13:04:34.564030 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-11-01 13:04:34.564036 | orchestrator | Saturday 01 November 2025 13:00:16 +0000 (0:00:01.146) 0:03:08.650 ***** 2025-11-01 13:04:34.564048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:04:34.564067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:04:34.564080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:04:34.564092 | orchestrator | 2025-11-01 13:04:34.564099 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-11-01 13:04:34.564105 | orchestrator | Saturday 01 November 2025 13:00:22 +0000 (0:00:05.577) 0:03:14.227 ***** 2025-11-01 13:04:34.564121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 13:04:34.564133 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.564140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 13:04:34.564148 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.564166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-11-01 13:04:34.564179 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.564186 | orchestrator | 2025-11-01 13:04:34.564193 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-11-01 13:04:34.564212 | orchestrator | Saturday 01 November 2025 13:00:23 +0000 (0:00:01.549) 0:03:15.777 ***** 2025-11-01 13:04:34.564220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 13:04:34.564263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564270 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.564277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 13:04:34.564306 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.564313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-11-01 13:04:34.564337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-11-01 13:04:34.564344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-11-01 13:04:34.564350 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.564357 | orchestrator | 2025-11-01 13:04:34.564364 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-11-01 13:04:34.564370 | orchestrator | Saturday 01 November 2025 13:00:25 +0000 (0:00:01.243) 0:03:17.021 ***** 2025-11-01 13:04:34.564377 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.564384 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.564390 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.564397 | orchestrator | 2025-11-01 13:04:34.564403 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-11-01 13:04:34.564410 | orchestrator | Saturday 01 November 2025 13:00:26 +0000 (0:00:01.470) 0:03:18.491 ***** 2025-11-01 13:04:34.564417 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.564423 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.564430 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.564436 | orchestrator | 2025-11-01 13:04:34.564443 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-11-01 13:04:34.564450 | orchestrator | Saturday 01 November 2025 13:00:28 +0000 (0:00:02.271) 0:03:20.763 ***** 2025-11-01 
13:04:34.564456 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.564463 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.564469 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.564476 | orchestrator | 2025-11-01 13:04:34.564482 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-11-01 13:04:34.564489 | orchestrator | Saturday 01 November 2025 13:00:29 +0000 (0:00:00.365) 0:03:21.128 ***** 2025-11-01 13:04:34.564496 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.564502 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.564509 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.564516 | orchestrator | 2025-11-01 13:04:34.564522 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-11-01 13:04:34.564529 | orchestrator | Saturday 01 November 2025 13:00:29 +0000 (0:00:00.594) 0:03:21.722 ***** 2025-11-01 13:04:34.564535 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.564547 | orchestrator | 2025-11-01 13:04:34.564554 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-11-01 13:04:34.564561 | orchestrator | Saturday 01 November 2025 13:00:30 +0000 (0:00:01.118) 0:03:22.840 ***** 2025-11-01 13:04:34.564572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:04:34.564583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.564591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.564598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:04:34.564606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.564618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.564629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:04:34.564640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.564648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.564655 | orchestrator | 2025-11-01 13:04:34.564662 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-11-01 13:04:34.564668 | orchestrator | Saturday 01 November 2025 13:00:35 +0000 (0:00:04.296) 0:03:27.137 ***** 2025-11-01 13:04:34.564676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:04:34.564690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.564702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.564785 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.564797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:04:34.564805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.564812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.564819 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.564831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:04:34.565386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:04:34.565407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:04:34.565414 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:04:34.565421 | orchestrator | 2025-11-01 13:04:34.565428 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-11-01 13:04:34.565441 | orchestrator | Saturday 01 November 2025 13:00:36 +0000 (0:00:00.985) 0:03:28.123 ***** 2025-11-01 13:04:34.565449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565464 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.565471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565484 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.565491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565504 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-11-01 13:04:34.565511 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.565517 | orchestrator | 2025-11-01 13:04:34.565523 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-11-01 13:04:34.565529 | orchestrator | Saturday 01 November 2025 13:00:37 +0000 (0:00:00.955) 0:03:29.078 ***** 2025-11-01 13:04:34.565536 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.565542 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.565548 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.565554 | orchestrator | 2025-11-01 13:04:34.565560 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-11-01 13:04:34.565566 | orchestrator | Saturday 01 November 2025 13:00:38 +0000 (0:00:01.410) 0:03:30.489 ***** 2025-11-01 13:04:34.565573 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.565579 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.565585 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.565591 | orchestrator | 2025-11-01 13:04:34.565597 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-11-01 13:04:34.565603 | orchestrator | Saturday 01 November 2025 13:00:40 +0000 (0:00:02.317) 0:03:32.807 ***** 2025-11-01 13:04:34.565609 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.565615 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.565621 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.565627 | orchestrator | 2025-11-01 13:04:34.565634 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-11-01 
13:04:34.565640 | orchestrator | Saturday 01 November 2025 13:00:41 +0000 (0:00:00.694) 0:03:33.501 ***** 2025-11-01 13:04:34.565646 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.565652 | orchestrator | 2025-11-01 13:04:34.565658 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-11-01 13:04:34.565664 | orchestrator | Saturday 01 November 2025 13:00:42 +0000 (0:00:01.125) 0:03:34.627 ***** 2025-11-01 13:04:34.565676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:04:34.565687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:04:34.565706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565713 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:04:34.565723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565730 | orchestrator | 2025-11-01 13:04:34.565737 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-11-01 13:04:34.565743 | orchestrator | Saturday 01 November 2025 13:00:46 +0000 (0:00:04.201) 0:03:38.829 ***** 2025-11-01 13:04:34.565756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:04:34.565768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565774 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.565781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:04:34.565791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565797 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.565807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:04:34.565818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.565825 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.565831 | orchestrator | 2025-11-01 13:04:34.565837 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-11-01 13:04:34.565844 | orchestrator | Saturday 01 November 2025 13:00:48 +0000 (0:00:01.378) 0:03:40.207 ***** 2025-11-01 13:04:34.565850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 13:04:34.565858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  
2025-11-01 13:04:34.565864 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.565870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 13:04:34.565877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-01 13:04:34.565883 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.565889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-11-01 13:04:34.565895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-11-01 13:04:34.565901 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.565908 | orchestrator | 2025-11-01 13:04:34.565914 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-11-01 13:04:34.565920 | orchestrator | Saturday 01 November 2025 13:00:49 +0000 (0:00:00.985) 0:03:41.192 ***** 2025-11-01 13:04:34.565926 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.565932 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.565939 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.565945 | orchestrator | 2025-11-01 13:04:34.565951 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-11-01 13:04:34.565957 | orchestrator | Saturday 01 November 2025 13:00:50 +0000 (0:00:01.314) 0:03:42.507 ***** 2025-11-01 13:04:34.565963 | orchestrator | changed: [testbed-node-0] 
2025-11-01 13:04:34.565969 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.565976 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.565982 | orchestrator | 2025-11-01 13:04:34.565989 | orchestrator | TASK [include_role : manila] *************************************************** 2025-11-01 13:04:34.565996 | orchestrator | Saturday 01 November 2025 13:00:52 +0000 (0:00:02.201) 0:03:44.708 ***** 2025-11-01 13:04:34.566006 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.566098 | orchestrator | 2025-11-01 13:04:34.566110 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-11-01 13:04:34.566118 | orchestrator | Saturday 01 November 2025 13:00:54 +0000 (0:00:01.467) 0:03:46.176 ***** 2025-11-01 13:04:34.566136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 13:04:34.566145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 13:04:34.566185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 
'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-11-01 13:04:34.566238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}})  2025-11-01 13:04:34.566245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566268 | orchestrator | 2025-11-01 13:04:34.566275 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-11-01 13:04:34.566283 | orchestrator | Saturday 01 November 2025 13:00:58 +0000 (0:00:03.897) 0:03:50.073 ***** 2025-11-01 13:04:34.566293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-01 13:04:34.566301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}})  2025-11-01 13:04:34.566316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 
13:04:34.566346 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.566352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566369 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.566375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-11-01 13:04:34.566382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.566410 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.566416 | orchestrator | 2025-11-01 13:04:34.566423 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-11-01 13:04:34.566429 | orchestrator | Saturday 01 November 2025 13:00:58 +0000 (0:00:00.763) 0:03:50.836 ***** 2025-11-01 13:04:34.566435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566448 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.566454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566470 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.566476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-11-01 13:04:34.566505 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.566511 | orchestrator | 2025-11-01 13:04:34.566517 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-11-01 13:04:34.566524 | orchestrator | Saturday 01 November 2025 13:01:00 +0000 (0:00:01.367) 0:03:52.204 ***** 2025-11-01 13:04:34.566530 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.566536 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.566542 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.566548 | orchestrator | 2025-11-01 13:04:34.566554 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-11-01 13:04:34.566560 | orchestrator | Saturday 01 November 2025 13:01:01 +0000 (0:00:01.373) 0:03:53.577 ***** 2025-11-01 13:04:34.566567 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.566573 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.566579 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.566585 | orchestrator | 2025-11-01 13:04:34.566591 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-11-01 13:04:34.566597 | orchestrator | Saturday 01 November 2025 13:01:03 +0000 (0:00:02.251) 0:03:55.828 ***** 2025-11-01 13:04:34.566603 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.566609 | orchestrator | 2025-11-01 13:04:34.566615 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-11-01 13:04:34.566626 | orchestrator | Saturday 01 November 2025 13:01:05 +0000 (0:00:01.498) 0:03:57.327 ***** 2025-11-01 13:04:34.566632 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-11-01 13:04:34.566638 | orchestrator | 2025-11-01 13:04:34.566645 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-11-01 13:04:34.566651 | orchestrator | Saturday 01 November 2025 13:01:08 +0000 (0:00:03.235) 0:04:00.562 ***** 2025-11-01 13:04:34.566662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566682 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.566689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566706 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.566721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566735 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:04:34.566741 | orchestrator | 2025-11-01 13:04:34.566753 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-11-01 13:04:34.566759 | orchestrator | Saturday 01 November 2025 13:01:11 +0000 (0:00:02.387) 0:04:02.950 ***** 2025-11-01 13:04:34.566766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566783 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.566794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566813 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.566823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:04:34.566834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-11-01 13:04:34.566840 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:04:34.566846 | orchestrator | 2025-11-01 13:04:34.566853 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-11-01 13:04:34.566859 | orchestrator | Saturday 01 November 2025 13:01:13 +0000 (0:00:02.646) 0:04:05.596 ***** 2025-11-01 13:04:34.566865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566883 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.566889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566902 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.566912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-11-01 13:04:34.566929 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.566935 | orchestrator | 2025-11-01 13:04:34.566941 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-11-01 13:04:34.566947 | orchestrator | Saturday 01 November 2025 13:01:16 +0000 (0:00:03.250) 0:04:08.846 ***** 2025-11-01 13:04:34.566957 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.566964 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.566970 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.566976 | orchestrator | 2025-11-01 13:04:34.566982 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-11-01 13:04:34.566989 | orchestrator | Saturday 01 November 2025 13:01:18 +0000 (0:00:01.946) 0:04:10.793 ***** 2025-11-01 13:04:34.566995 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567001 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567007 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567013 | orchestrator | 2025-11-01 13:04:34.567019 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-11-01 13:04:34.567025 | orchestrator | Saturday 01 November 2025 13:01:20 +0000 (0:00:01.551) 0:04:12.345 ***** 2025-11-01 13:04:34.567031 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567037 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567044 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567050 | orchestrator | 2025-11-01 13:04:34.567056 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-11-01 13:04:34.567062 | orchestrator | Saturday 
01 November 2025 13:01:20 +0000 (0:00:00.356) 0:04:12.701 ***** 2025-11-01 13:04:34.567068 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.567074 | orchestrator | 2025-11-01 13:04:34.567080 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-11-01 13:04:34.567087 | orchestrator | Saturday 01 November 2025 13:01:22 +0000 (0:00:01.571) 0:04:14.272 ***** 2025-11-01 13:04:34.567093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-01 13:04:34.567100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2025-11-01 13:04:34.567111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-11-01 13:04:34.567118 | orchestrator | 2025-11-01 13:04:34.567124 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-11-01 13:04:34.567134 | orchestrator | Saturday 01 November 2025 13:01:23 +0000 (0:00:01.606) 0:04:15.879 ***** 2025-11-01 13:04:34.567140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 13:04:34.567159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 13:04:34.567166 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567172 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-11-01 13:04:34.567185 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567192 | orchestrator | 2025-11-01 13:04:34.567198 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-11-01 13:04:34.567216 | orchestrator | Saturday 01 November 2025 13:01:24 +0000 (0:00:00.418) 0:04:16.297 ***** 2025-11-01 13:04:34.567222 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 13:04:34.567229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 13:04:34.567236 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567242 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-11-01 13:04:34.567263 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567269 | orchestrator | 2025-11-01 13:04:34.567275 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-11-01 13:04:34.567281 | orchestrator | Saturday 01 November 2025 13:01:25 +0000 (0:00:00.997) 0:04:17.295 ***** 2025-11-01 13:04:34.567287 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567293 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567300 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567306 | orchestrator | 2025-11-01 13:04:34.567312 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-11-01 13:04:34.567318 | orchestrator | Saturday 01 November 2025 13:01:25 +0000 (0:00:00.499) 0:04:17.795 ***** 2025-11-01 13:04:34.567324 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:04:34.567330 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567336 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567342 | orchestrator | 2025-11-01 13:04:34.567348 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-11-01 13:04:34.567355 | orchestrator | Saturday 01 November 2025 13:01:27 +0000 (0:00:01.436) 0:04:19.232 ***** 2025-11-01 13:04:34.567361 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.567367 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.567373 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.567379 | orchestrator | 2025-11-01 13:04:34.567385 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-11-01 13:04:34.567394 | orchestrator | Saturday 01 November 2025 13:01:27 +0000 (0:00:00.362) 0:04:19.594 ***** 2025-11-01 13:04:34.567401 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.567407 | orchestrator | 2025-11-01 13:04:34.567413 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-11-01 13:04:34.567419 | orchestrator | Saturday 01 November 2025 13:01:29 +0000 (0:00:01.657) 0:04:21.252 ***** 2025-11-01 13:04:34.567426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:04:34.567433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 13:04:34.567473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:04:34.567479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 13:04:34.567547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.567630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:04:34.567674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.567807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-11-01 13:04:34.567835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567882 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-11-01 13:04:34.567925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.567972 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-11-01 13:04:34.567985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:04:34.567992 | orchestrator | 2025-11-01 13:04:34.567999 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-11-01 13:04:34.568005 | orchestrator | Saturday 01 November 2025 13:01:34 +0000 (0:00:04.808) 0:04:26.060 ***** 2025-11-01 13:04:34.568012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-01 13:04:34.568025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-11-01 13:04:34.568097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-01 13:04:34.568114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-11-01 13:04:34.568328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568342 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-11-01 13:04:34.568367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568465 | orchestrator | skipping: [testbed-node-2] =>
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-11-01 13:04:34.568528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-11-01 13:04:34.568550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568568 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.568608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-11-01 13:04:34.568693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568709 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.568715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-11-01 13:04:34.568726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-11-01 13:04:34.568746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-11-01 13:04:34.568762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-11-01 13:04:34.568768 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.568773 | orchestrator |
2025-11-01 13:04:34.568779 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-11-01 13:04:34.568784 | orchestrator | Saturday 01 November 2025 13:01:35 +0000 (0:00:01.673) 0:04:27.733 *****
2025-11-01 13:04:34.568790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-11-01 13:04:34.568796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-11-01 13:04:34.568802 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.568807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-11-01 13:04:34.568813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-11-01 13:04:34.568818 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.568824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port':
'9696'}})
2025-11-01 13:04:34.568829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-11-01 13:04:34.568835 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.568848 | orchestrator |
2025-11-01 13:04:34.568854 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-11-01 13:04:34.568859 | orchestrator | Saturday 01 November 2025 13:01:38 +0000 (0:00:02.519) 0:04:30.253 *****
2025-11-01 13:04:34.568864 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.568870 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.568875 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.568881 | orchestrator |
2025-11-01 13:04:34.568886 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-11-01 13:04:34.568891 | orchestrator | Saturday 01 November 2025 13:01:39 +0000 (0:00:01.428) 0:04:31.681 *****
2025-11-01 13:04:34.568897 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.568902 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.568908 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.568913 | orchestrator |
2025-11-01 13:04:34.568918 | orchestrator | TASK [include_role : placement] ************************************************
2025-11-01 13:04:34.568924 | orchestrator | Saturday 01 November 2025 13:01:42 +0000 (0:00:02.320) 0:04:34.002 *****
2025-11-01 13:04:34.568929 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.568934 | orchestrator |
2025-11-01 13:04:34.568940 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-11-01 13:04:34.568945 | orchestrator | Saturday 01 November 2025 13:01:43 +0000 (0:00:01.407) 0:04:35.409 *****
2025-11-01 13:04:34.568970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.568981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.568987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api',
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.568993 | orchestrator | 2025-11-01 13:04:34.568998 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-11-01 13:04:34.569004 | orchestrator | Saturday 01 November 2025 13:01:47 +0000 (0:00:04.193) 0:04:39.603 ***** 2025-11-01 13:04:34.569009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569015 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569044 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569059 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569064 | orchestrator | 2025-11-01 13:04:34.569069 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-11-01 13:04:34.569075 | orchestrator | Saturday 01 November 2025 13:01:48 +0000 (0:00:00.653) 0:04:40.256 ***** 2025-11-01 13:04:34.569080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569092 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569109 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569125 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569131 | orchestrator | 2025-11-01 13:04:34.569136 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-11-01 13:04:34.569141 | orchestrator | Saturday 01 November 2025 13:01:49 +0000 (0:00:00.864) 0:04:41.120 ***** 2025-11-01 13:04:34.569150 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569156 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569161 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569166 | orchestrator | 2025-11-01 13:04:34.569172 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-11-01 13:04:34.569177 | orchestrator | Saturday 01 November 2025 13:01:51 +0000 (0:00:01.965) 0:04:43.086 ***** 2025-11-01 13:04:34.569183 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569188 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569194 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569211 | orchestrator | 2025-11-01 13:04:34.569217 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-11-01 13:04:34.569223 | orchestrator | Saturday 01 November 2025 13:01:53 +0000 (0:00:01.965) 0:04:45.052 ***** 2025-11-01 13:04:34.569228 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.569233 | orchestrator | 2025-11-01 13:04:34.569239 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-11-01 13:04:34.569244 | orchestrator | Saturday 01 November 2025 13:01:54 +0000 (0:00:01.723) 0:04:46.775 ***** 2025-11-01 13:04:34.569265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.569276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.569298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.569333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569349 | orchestrator | 2025-11-01 13:04:34.569354 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-11-01 13:04:34.569360 | orchestrator | Saturday 01 November 2025 13:02:00 +0000 (0:00:05.226) 0:04:52.001 ***** 2025-11-01 13:04:34.569379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569400 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569428 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.569457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.569473 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569478 | orchestrator | 2025-11-01 13:04:34.569484 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-11-01 13:04:34.569489 | orchestrator | Saturday 01 November 2025 13:02:01 +0000 (0:00:01.463) 0:04:53.465 ***** 2025-11-01 13:04:34.569495 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569518 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  
2025-11-01 13:04:34.569559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569565 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-11-01 13:04:34.569587 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569593 | orchestrator | 2025-11-01 13:04:34.569603 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-11-01 13:04:34.569609 | orchestrator | Saturday 01 November 2025 13:02:02 +0000 (0:00:01.036) 0:04:54.501 ***** 2025-11-01 13:04:34.569618 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569623 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569629 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569634 | orchestrator | 2025-11-01 13:04:34.569640 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-11-01 13:04:34.569645 | orchestrator | Saturday 01 November 2025 13:02:04 +0000 (0:00:01.436) 0:04:55.937 ***** 2025-11-01 
13:04:34.569650 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569656 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569661 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569666 | orchestrator | 2025-11-01 13:04:34.569672 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-11-01 13:04:34.569677 | orchestrator | Saturday 01 November 2025 13:02:06 +0000 (0:00:02.228) 0:04:58.166 ***** 2025-11-01 13:04:34.569683 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.569688 | orchestrator | 2025-11-01 13:04:34.569693 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-11-01 13:04:34.569699 | orchestrator | Saturday 01 November 2025 13:02:08 +0000 (0:00:01.887) 0:05:00.053 ***** 2025-11-01 13:04:34.569704 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-11-01 13:04:34.569710 | orchestrator | 2025-11-01 13:04:34.569715 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-11-01 13:04:34.569720 | orchestrator | Saturday 01 November 2025 13:02:09 +0000 (0:00:00.972) 0:05:01.026 ***** 2025-11-01 13:04:34.569726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 13:04:34.569732 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 13:04:34.569738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-11-01 13:04:34.569744 | orchestrator | 2025-11-01 13:04:34.569749 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-11-01 13:04:34.569754 | orchestrator | Saturday 01 November 2025 13:02:13 +0000 (0:00:04.729) 0:05:05.755 ***** 2025-11-01 13:04:34.569773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.569784 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569789 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.569795 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.569809 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569815 | orchestrator | 2025-11-01 13:04:34.569820 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-11-01 13:04:34.569826 | orchestrator | Saturday 01 November 2025 13:02:15 +0000 (0:00:01.229) 0:05:06.985 ***** 2025-11-01 13:04:34.569831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 13:04:34.569837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2025-11-01 13:04:34.569843 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.569848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 13:04:34.569854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 13:04:34.569859 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.569865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 13:04:34.569871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-11-01 13:04:34.569876 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.569882 | orchestrator | 2025-11-01 13:04:34.569887 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 13:04:34.569892 | orchestrator | Saturday 01 November 2025 13:02:16 +0000 (0:00:01.673) 0:05:08.659 ***** 2025-11-01 13:04:34.569898 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569903 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569909 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569914 | orchestrator | 2025-11-01 13:04:34.569919 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2025-11-01 13:04:34.569925 | orchestrator | Saturday 01 November 2025 13:02:19 +0000 (0:00:02.590) 0:05:11.249 ***** 2025-11-01 13:04:34.569934 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.569939 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.569944 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.569950 | orchestrator | 2025-11-01 13:04:34.569955 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-11-01 13:04:34.569961 | orchestrator | Saturday 01 November 2025 13:02:22 +0000 (0:00:03.324) 0:05:14.573 ***** 2025-11-01 13:04:34.569979 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-11-01 13:04:34.569985 | orchestrator | 2025-11-01 13:04:34.569991 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-11-01 13:04:34.569996 | orchestrator | Saturday 01 November 2025 13:02:24 +0000 (0:00:01.632) 0:05:16.206 ***** 2025-11-01 13:04:34.570002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570008 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570044 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570055 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570061 | orchestrator | 2025-11-01 13:04:34.570066 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-11-01 13:04:34.570072 | orchestrator | Saturday 01 November 2025 13:02:25 +0000 (0:00:01.410) 0:05:17.616 ***** 2025-11-01 13:04:34.570077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570083 | orchestrator | skipping: [testbed-node-0] 2025-11-01 
13:04:34.570088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570098 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-11-01 13:04:34.570109 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570114 | orchestrator | 2025-11-01 13:04:34.570120 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-11-01 13:04:34.570125 | orchestrator | Saturday 01 November 2025 13:02:27 +0000 (0:00:01.479) 0:05:19.096 ***** 2025-11-01 13:04:34.570131 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570136 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570141 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570147 | orchestrator | 2025-11-01 13:04:34.570168 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 13:04:34.570174 | orchestrator | Saturday 01 November 
2025 13:02:29 +0000 (0:00:02.085) 0:05:21.181 ***** 2025-11-01 13:04:34.570180 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.570185 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.570191 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.570196 | orchestrator | 2025-11-01 13:04:34.570215 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-01 13:04:34.570221 | orchestrator | Saturday 01 November 2025 13:02:31 +0000 (0:00:02.603) 0:05:23.785 ***** 2025-11-01 13:04:34.570226 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.570231 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.570237 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.570242 | orchestrator | 2025-11-01 13:04:34.570247 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-11-01 13:04:34.570253 | orchestrator | Saturday 01 November 2025 13:02:35 +0000 (0:00:03.394) 0:05:27.179 ***** 2025-11-01 13:04:34.570258 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-11-01 13:04:34.570264 | orchestrator | 2025-11-01 13:04:34.570269 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-11-01 13:04:34.570275 | orchestrator | Saturday 01 November 2025 13:02:36 +0000 (0:00:01.025) 0:05:28.205 ***** 2025-11-01 13:04:34.570280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570286 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570301 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570313 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570318 | orchestrator | 2025-11-01 13:04:34.570324 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-11-01 13:04:34.570329 | orchestrator | Saturday 01 November 2025 13:02:37 +0000 (0:00:01.519) 0:05:29.725 ***** 2025-11-01 13:04:34.570335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570340 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570400 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-11-01 13:04:34.570430 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570436 | orchestrator | 2025-11-01 13:04:34.570441 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-11-01 13:04:34.570447 | orchestrator | Saturday 01 November 2025 13:02:39 +0000 (0:00:01.592) 0:05:31.318 ***** 2025-11-01 13:04:34.570452 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:04:34.570458 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570463 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570468 | orchestrator | 2025-11-01 13:04:34.570474 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-11-01 13:04:34.570479 | orchestrator | Saturday 01 November 2025 13:02:41 +0000 (0:00:01.742) 0:05:33.060 ***** 2025-11-01 13:04:34.570487 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.570493 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.570498 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.570504 | orchestrator | 2025-11-01 13:04:34.570509 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-11-01 13:04:34.570515 | orchestrator | Saturday 01 November 2025 13:02:43 +0000 (0:00:02.648) 0:05:35.708 ***** 2025-11-01 13:04:34.570520 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.570525 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.570535 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.570540 | orchestrator | 2025-11-01 13:04:34.570545 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-11-01 13:04:34.570551 | orchestrator | Saturday 01 November 2025 13:02:47 +0000 (0:00:03.629) 0:05:39.338 ***** 2025-11-01 13:04:34.570556 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.570562 | orchestrator | 2025-11-01 13:04:34.570567 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-11-01 13:04:34.570573 | orchestrator | Saturday 01 November 2025 13:02:49 +0000 (0:00:01.841) 0:05:41.179 ***** 2025-11-01 13:04:34.570578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.570584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570610 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.570629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:04:34.570683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570706 | orchestrator | 2025-11-01 13:04:34.570712 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-11-01 13:04:34.570717 | orchestrator | Saturday 01 November 2025 13:02:53 +0000 (0:00:03.868) 0:05:45.048 ***** 2025-11-01 13:04:34.570736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.570743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570755 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.570773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570797 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570827 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:04:34.570839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:04:34.570845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:04:34.570877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:04:34.570883 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570888 | orchestrator | 2025-11-01 13:04:34.570894 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-11-01 13:04:34.570900 | orchestrator | Saturday 01 November 2025 13:02:53 +0000 (0:00:00.793) 0:05:45.841 ***** 2025-11-01 13:04:34.570905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570917 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.570922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570934 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.570939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-11-01 13:04:34.570950 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.570955 | orchestrator | 2025-11-01 13:04:34.570961 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-11-01 13:04:34.570966 | orchestrator | Saturday 01 November 2025 13:02:55 +0000 (0:00:01.710) 0:05:47.552 ***** 2025-11-01 13:04:34.570972 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.570977 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.570982 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.570988 | orchestrator | 2025-11-01 13:04:34.570993 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-11-01 13:04:34.570999 | orchestrator | Saturday 01 November 2025 13:02:57 +0000 (0:00:01.425) 0:05:48.977 ***** 2025-11-01 13:04:34.571004 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.571009 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.571015 | orchestrator | changed: [testbed-node-2] 
2025-11-01 13:04:34.571020 | orchestrator | 2025-11-01 13:04:34.571025 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-11-01 13:04:34.571031 | orchestrator | Saturday 01 November 2025 13:02:59 +0000 (0:00:02.312) 0:05:51.290 ***** 2025-11-01 13:04:34.571040 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.571045 | orchestrator | 2025-11-01 13:04:34.571051 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-11-01 13:04:34.571056 | orchestrator | Saturday 01 November 2025 13:03:00 +0000 (0:00:01.505) 0:05:52.795 ***** 2025-11-01 13:04:34.571076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:04:34.571088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:04:34.571094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:04:34.571100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:04:34.571124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:04:34.571135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:04:34.571141 | orchestrator | 2025-11-01 13:04:34.571147 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-11-01 13:04:34.571152 | orchestrator | Saturday 01 November 2025 13:03:07 +0000 (0:00:06.313) 0:05:59.109 ***** 2025-11-01 13:04:34.571158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:04:34.571164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:04:34.571174 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.571179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:04:34.571210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:04:34.571220 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.571226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:04:34.571232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:04:34.571242 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.571247 | orchestrator | 2025-11-01 13:04:34.571253 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-11-01 13:04:34.571258 | orchestrator | Saturday 01 November 2025 13:03:08 +0000 (0:00:00.846) 0:05:59.955 ***** 2025-11-01 13:04:34.571263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 13:04:34.571269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571280 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.571286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 13:04:34.571305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571317 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.571323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-11-01 13:04:34.571328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-11-01 13:04:34.571343 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.571348 | orchestrator | 2025-11-01 13:04:34.571354 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-11-01 13:04:34.571359 | orchestrator | Saturday 01 November 2025 13:03:09 +0000 (0:00:01.056) 0:06:01.011 ***** 2025-11-01 13:04:34.571364 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.571370 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.571375 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.571380 | orchestrator | 2025-11-01 13:04:34.571386 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-11-01 13:04:34.571391 | orchestrator | Saturday 01 November 2025 13:03:09 +0000 (0:00:00.883) 0:06:01.895 ***** 2025-11-01 13:04:34.571396 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.571402 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.571407 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.571412 | orchestrator | 2025-11-01 13:04:34.571418 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-11-01 13:04:34.571427 | orchestrator | Saturday 01 November 2025 13:03:11 +0000 (0:00:01.526) 0:06:03.421 ***** 2025-11-01 13:04:34.571432 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:04:34.571437 | orchestrator | 2025-11-01 13:04:34.571443 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-11-01 13:04:34.571448 | orchestrator | Saturday 01 November 2025 13:03:13 +0000 (0:00:01.587) 0:06:05.009 ***** 2025-11-01 13:04:34.571454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 13:04:34.571460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 13:04:34.571486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 13:04:34.571548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 13:04:34.571597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 13:04:34.571629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 13:04:34.571664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571701 | orchestrator | 2025-11-01 13:04:34.571707 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-11-01 13:04:34.571713 | orchestrator | Saturday 01 November 2025 13:03:18 +0000 (0:00:04.972) 0:06:09.982 ***** 2025-11-01 13:04:34.571721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 13:04:34.571731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 13:04:34.571766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571794 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.571800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 13:04:34.571806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 13:04:34.571844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571873 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.571882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 13:04:34.571888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:04:34.571893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:04:34.571905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:04:34.571913 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 13:04:34.571926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-11-01 13:04:34.571933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:04:34.571938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:04:34.571944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:04:34.571950 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.571955 | orchestrator |
2025-11-01 13:04:34.571961 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-11-01 13:04:34.571966 | orchestrator | Saturday 01 November 2025 13:03:19 +0000 (0:00:01.493) 0:06:11.475 *****
2025-11-01 13:04:34.571972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.571977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.571983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.571989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.572000 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.572013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.572019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.572025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.572030 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.572045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-11-01 13:04:34.572051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.572057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-11-01 13:04:34.572062 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572068 | orchestrator |
2025-11-01 13:04:34.572073 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-11-01 13:04:34.572078 | orchestrator | Saturday 01 November 2025 13:03:20 +0000 (0:00:01.209) 0:06:12.685 *****
2025-11-01 13:04:34.572084 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572089 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572095 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572100 | orchestrator |
2025-11-01 13:04:34.572106 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-11-01 13:04:34.572111 | orchestrator | Saturday 01 November 2025 13:03:21 +0000 (0:00:00.469) 0:06:13.154 *****
2025-11-01 13:04:34.572116 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572122 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572127 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572133 | orchestrator |
2025-11-01 13:04:34.572138 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-11-01 13:04:34.572144 | orchestrator | Saturday 01 November 2025 13:03:22 +0000 (0:00:01.602) 0:06:14.757 *****
2025-11-01 13:04:34.572149 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.572155 | orchestrator |
2025-11-01 13:04:34.572160 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-11-01 13:04:34.572169 | orchestrator | Saturday 01 November 2025 13:03:24 +0000 (0:00:02.033) 0:06:16.790 *****
2025-11-01 13:04:34.572175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572234 | orchestrator |
2025-11-01 13:04:34.572240 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-11-01 13:04:34.572246 | orchestrator | Saturday 01 November 2025 13:03:27 +0000 (0:00:02.729) 0:06:19.520 *****
2025-11-01 13:04:34.572251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572269 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572274 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-11-01 13:04:34.572289 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572295 | orchestrator |
2025-11-01 13:04:34.572300 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-11-01 13:04:34.572308 | orchestrator | Saturday 01 November 2025 13:03:28 +0000 (0:00:00.427) 0:06:19.948 *****
2025-11-01 13:04:34.572314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-11-01 13:04:34.572319 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-11-01 13:04:34.572330 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-11-01 13:04:34.572341 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572346 | orchestrator |
2025-11-01 13:04:34.572351 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-11-01 13:04:34.572357 | orchestrator | Saturday 01 November 2025 13:03:29 +0000 (0:00:01.157) 0:06:21.105 *****
2025-11-01 13:04:34.572362 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572367 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572373 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572378 | orchestrator |
2025-11-01 13:04:34.572383 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-11-01 13:04:34.572392 | orchestrator | Saturday 01 November 2025 13:03:29 +0000 (0:00:00.515) 0:06:21.621 *****
2025-11-01 13:04:34.572398 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572403 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572408 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572414 | orchestrator |
2025-11-01 13:04:34.572419 | orchestrator | TASK [include_role : skyline] **************************************************
2025-11-01 13:04:34.572424 | orchestrator | Saturday 01 November 2025 13:03:31 +0000 (0:00:01.538) 0:06:23.159 *****
2025-11-01 13:04:34.572430 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:04:34.572435 | orchestrator |
2025-11-01 13:04:34.572440 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-11-01 13:04:34.572446 | orchestrator | Saturday 01 November 2025 13:03:33 +0000 (0:00:02.132) 0:06:25.292 *****
2025-11-01 13:04:34.572451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572498 | orchestrator |
2025-11-01 13:04:34.572505 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-11-01 13:04:34.572510 | orchestrator | Saturday 01 November 2025 13:03:40 +0000 (0:00:07.276) 0:06:32.568 *****
2025-11-01 13:04:34.572517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572532 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572547 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-11-01 13:04:34.572571 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572576 | orchestrator |
2025-11-01 13:04:34.572581 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-11-01 13:04:34.572586 | orchestrator | Saturday 01 November 2025 13:03:41 +0000 (0:00:00.757) 0:06:33.326 *****
2025-11-01 13:04:34.572591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572610 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572635 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-11-01 13:04:34.572661 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572670 | orchestrator |
2025-11-01 13:04:34.572675 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-11-01 13:04:34.572680 | orchestrator | Saturday 01 November 2025 13:03:43 +0000 (0:00:01.925) 0:06:35.251 *****
2025-11-01 13:04:34.572685 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.572689 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.572694 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.572699 | orchestrator |
2025-11-01 13:04:34.572704 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-11-01 13:04:34.572711 | orchestrator | Saturday 01 November 2025 13:03:44 +0000 (0:00:01.385) 0:06:36.636 *****
2025-11-01 13:04:34.572716 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.572721 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.572726 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.572730 | orchestrator |
2025-11-01 13:04:34.572735 | orchestrator | TASK [include_role : swift] ****************************************************
2025-11-01 13:04:34.572740 | orchestrator | Saturday 01 November 2025 13:03:47 +0000 (0:00:02.364) 0:06:39.001 *****
2025-11-01 13:04:34.572745 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572749 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572754 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572759 | orchestrator |
2025-11-01 13:04:34.572764 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-11-01 13:04:34.572768 | orchestrator | Saturday 01 November 2025 13:03:47 +0000 (0:00:00.380) 0:06:39.381 *****
2025-11-01 13:04:34.572773 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572778 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572783 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572787 | orchestrator |
2025-11-01 13:04:34.572792 | orchestrator | TASK [include_role : trove] ****************************************************
2025-11-01 13:04:34.572797 | orchestrator | Saturday 01 November 2025 13:03:47 +0000 (0:00:00.359) 0:06:39.741 *****
2025-11-01 13:04:34.572802 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572807 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572811 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572816 | orchestrator |
2025-11-01 13:04:34.572821 | orchestrator | TASK [include_role : venus] ****************************************************
2025-11-01 13:04:34.572825 | orchestrator | Saturday 01 November 2025 13:03:48 +0000 (0:00:00.798) 0:06:40.540 *****
2025-11-01 13:04:34.572830 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572835 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572840 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572844 | orchestrator |
2025-11-01 13:04:34.572849 | orchestrator | TASK [include_role : watcher] **************************************************
2025-11-01 13:04:34.572854 | orchestrator | Saturday 01 November 2025 13:03:48 +0000 (0:00:00.372) 0:06:40.913 *****
2025-11-01 13:04:34.572859 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572863 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572868 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572873 | orchestrator |
2025-11-01 13:04:34.572877 | orchestrator | TASK [include_role : zun] ******************************************************
2025-11-01 13:04:34.572882 | orchestrator | Saturday 01 November 2025 13:03:49 +0000 (0:00:00.336) 0:06:41.249 *****
2025-11-01 13:04:34.572887 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:04:34.572892 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:04:34.572896 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:04:34.572901 | orchestrator |
2025-11-01 13:04:34.572906 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-11-01 13:04:34.572911 | orchestrator | Saturday 01 November 2025 13:03:50 +0000 (0:00:00.964) 0:06:42.213 *****
2025-11-01 13:04:34.572915 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.572920 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.572925 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.572929 | orchestrator |
2025-11-01 13:04:34.572934 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-11-01 13:04:34.572943 | orchestrator | Saturday 01 November 2025 13:03:51 +0000 (0:00:00.812) 0:06:43.026 *****
2025-11-01 13:04:34.572948 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.572952 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.572957 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.572962 | orchestrator |
2025-11-01 13:04:34.572967 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-11-01 13:04:34.572971 | orchestrator | Saturday 01 November 2025 13:03:51 +0000 (0:00:00.381) 0:06:43.408 *****
2025-11-01 13:04:34.572976 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.572981 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.572986 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.572990 | orchestrator |
2025-11-01 13:04:34.572995 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-11-01 13:04:34.573000 | orchestrator | Saturday 01 November 2025 13:03:52 +0000 (0:00:00.960) 0:06:44.368 *****
2025-11-01 13:04:34.573004 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.573009 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.573014 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.573018 | orchestrator |
2025-11-01 13:04:34.573023 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-11-01 13:04:34.573028 | orchestrator | Saturday 01 November 2025 13:03:53 +0000 (0:00:01.319) 0:06:45.688 *****
2025-11-01 13:04:34.573033 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.573037 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.573044 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.573049 | orchestrator |
2025-11-01 13:04:34.573054 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-11-01 13:04:34.573059 | orchestrator | Saturday 01 November 2025 13:03:54 +0000 (0:00:00.982) 0:06:46.670 *****
2025-11-01 13:04:34.573063 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:04:34.573068 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:04:34.573073 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:04:34.573078 | orchestrator |
2025-11-01 13:04:34.573082 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-11-01 13:04:34.573087 | orchestrator | Saturday 01 November 2025 13:04:05 +0000 (0:00:10.653) 0:06:57.324 *****
2025-11-01 13:04:34.573092 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:04:34.573097 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:04:34.573101 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:04:34.573106 | orchestrator |
2025-11-01 13:04:34.573111 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-11-01 13:04:34.573116 | orchestrator | Saturday 01 November 2025 13:04:06 +0000 (0:00:00.726) 0:06:58.050 *****
2025-11-01 13:04:34.573120 |
orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.573125 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.573130 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.573135 | orchestrator | 2025-11-01 13:04:34.573139 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-11-01 13:04:34.573144 | orchestrator | Saturday 01 November 2025 13:04:15 +0000 (0:00:09.577) 0:07:07.628 ***** 2025-11-01 13:04:34.573152 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.573156 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.573161 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.573166 | orchestrator | 2025-11-01 13:04:34.573171 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-11-01 13:04:34.573175 | orchestrator | Saturday 01 November 2025 13:04:20 +0000 (0:00:04.305) 0:07:11.934 ***** 2025-11-01 13:04:34.573180 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:04:34.573185 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:04:34.573190 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:04:34.573194 | orchestrator | 2025-11-01 13:04:34.573211 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-11-01 13:04:34.573216 | orchestrator | Saturday 01 November 2025 13:04:28 +0000 (0:00:08.444) 0:07:20.379 ***** 2025-11-01 13:04:34.573224 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.573229 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573234 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573238 | orchestrator | 2025-11-01 13:04:34.573243 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-11-01 13:04:34.573248 | orchestrator | Saturday 01 November 2025 13:04:28 +0000 (0:00:00.363) 0:07:20.742 ***** 2025-11-01 13:04:34.573253 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:04:34.573257 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573262 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573267 | orchestrator | 2025-11-01 13:04:34.573272 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-11-01 13:04:34.573276 | orchestrator | Saturday 01 November 2025 13:04:29 +0000 (0:00:00.382) 0:07:21.125 ***** 2025-11-01 13:04:34.573281 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.573286 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573291 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573295 | orchestrator | 2025-11-01 13:04:34.573300 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-11-01 13:04:34.573305 | orchestrator | Saturday 01 November 2025 13:04:29 +0000 (0:00:00.759) 0:07:21.884 ***** 2025-11-01 13:04:34.573310 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.573314 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573319 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573324 | orchestrator | 2025-11-01 13:04:34.573329 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-11-01 13:04:34.573333 | orchestrator | Saturday 01 November 2025 13:04:30 +0000 (0:00:00.405) 0:07:22.290 ***** 2025-11-01 13:04:34.573338 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:04:34.573343 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573348 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573352 | orchestrator | 2025-11-01 13:04:34.573357 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-11-01 13:04:34.573362 | orchestrator | Saturday 01 November 2025 13:04:30 +0000 (0:00:00.383) 0:07:22.674 ***** 2025-11-01 13:04:34.573367 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:04:34.573371 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:04:34.573376 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:04:34.573381 | orchestrator | 2025-11-01 13:04:34.573386 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-11-01 13:04:34.573390 | orchestrator | Saturday 01 November 2025 13:04:31 +0000 (0:00:00.374) 0:07:23.049 ***** 2025-11-01 13:04:34.573395 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.573400 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.573405 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.573409 | orchestrator | 2025-11-01 13:04:34.573414 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-11-01 13:04:34.573419 | orchestrator | Saturday 01 November 2025 13:04:32 +0000 (0:00:01.428) 0:07:24.478 ***** 2025-11-01 13:04:34.573424 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:04:34.573428 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:04:34.573433 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:04:34.573438 | orchestrator | 2025-11-01 13:04:34.573443 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:04:34.573447 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 13:04:34.573453 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 13:04:34.573457 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-11-01 13:04:34.573466 | orchestrator | 2025-11-01 13:04:34.573471 | orchestrator | 2025-11-01 13:04:34.573478 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:04:34.573483 | orchestrator | Saturday 01 November 2025 13:04:33 +0000 
(0:00:00.913) 0:07:25.391 ***** 2025-11-01 13:04:34.573488 | orchestrator | =============================================================================== 2025-11-01 13:04:34.573493 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.65s 2025-11-01 13:04:34.573497 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.58s 2025-11-01 13:04:34.573502 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.44s 2025-11-01 13:04:34.573507 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 8.05s 2025-11-01 13:04:34.573512 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.28s 2025-11-01 13:04:34.573516 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.31s 2025-11-01 13:04:34.573521 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.09s 2025-11-01 13:04:34.573526 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.58s 2025-11-01 13:04:34.573531 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.48s 2025-11-01 13:04:34.573539 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.38s 2025-11-01 13:04:34.573544 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.34s 2025-11-01 13:04:34.573549 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.33s 2025-11-01 13:04:34.573554 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.23s 2025-11-01 13:04:34.573559 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.18s 2025-11-01 13:04:34.573563 | orchestrator | haproxy-config : Copying over prometheus haproxy config 
----------------- 4.97s 2025-11-01 13:04:34.573568 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.94s 2025-11-01 13:04:34.573573 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.81s 2025-11-01 13:04:34.573577 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.73s 2025-11-01 13:04:34.573582 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.71s 2025-11-01 13:04:34.573587 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.43s 2025-11-01 13:04:34.573592 | orchestrator | 2025-11-01 13:04:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:37.604874 | orchestrator | 2025-11-01 13:04:37 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:04:37.606973 | orchestrator | 2025-11-01 13:04:37 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:37.608829 | orchestrator | 2025-11-01 13:04:37 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:04:37.608849 | orchestrator | 2025-11-01 13:04:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:40.662329 | orchestrator | 2025-11-01 13:04:40 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:04:40.664023 | orchestrator | 2025-11-01 13:04:40 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:40.666426 | orchestrator | 2025-11-01 13:04:40 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:04:40.666898 | orchestrator | 2025-11-01 13:04:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:04:43.708725 | orchestrator | 2025-11-01 13:04:43 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:04:43.711111 | orchestrator | 2025-11-01 13:04:43 | 
INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:04:43.712510 | orchestrator | 2025-11-01 13:04:43 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:04:43.712885 | orchestrator | 2025-11-01 13:04:43 | INFO  | Wait 1 second(s) until the next check [identical polling of the same three tasks repeated every ~3 s from 13:04:46 through 13:06:24] 2025-11-01 13:06:27.329039 | orchestrator | 2025-11-01 13:06:27 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:06:27.331163 | orchestrator | 2025-11-01 13:06:27 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state STARTED 2025-11-01 13:06:27.332769 | orchestrator | 2025-11-01 13:06:27 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:06:27.332954 | orchestrator | 2025-11-01 13:06:27 | INFO  |
Wait 1 second(s) until the next check 2025-11-01 13:06:30.375706 | orchestrator | 2025-11-01 13:06:30 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:06:30.382873 | orchestrator | 2025-11-01 13:06:30.382906 | orchestrator | 2025-11-01 13:06:30.382918 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-11-01 13:06:30.382929 | orchestrator | 2025-11-01 13:06:30.382939 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-01 13:06:30.382949 | orchestrator | Saturday 01 November 2025 12:54:05 +0000 (0:00:01.379) 0:00:01.379 ***** 2025-11-01 13:06:30.382961 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.382995 | orchestrator | 2025-11-01 13:06:30.383006 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-01 13:06:30.383016 | orchestrator | Saturday 01 November 2025 12:54:07 +0000 (0:00:01.744) 0:00:03.124 ***** 2025-11-01 13:06:30.383026 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.383036 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.383046 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.383055 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.383065 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.383074 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.383084 | orchestrator | 2025-11-01 13:06:30.383093 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-01 13:06:30.383103 | orchestrator | Saturday 01 November 2025 12:54:09 +0000 (0:00:02.435) 0:00:05.560 ***** 2025-11-01 13:06:30.383113 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.383122 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.383132 | 
orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383141 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383151 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383160 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383170 | orchestrator |
2025-11-01 13:06:30.383179 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-11-01 13:06:30.383189 | orchestrator | Saturday 01 November 2025 12:54:10 +0000 (0:00:01.265) 0:00:06.825 *****
2025-11-01 13:06:30.383240 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383251 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383261 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383270 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383279 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383289 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383298 | orchestrator |
2025-11-01 13:06:30.383308 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-11-01 13:06:30.383318 | orchestrator | Saturday 01 November 2025 12:54:11 +0000 (0:00:01.087) 0:00:07.912 *****
2025-11-01 13:06:30.383327 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383337 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383346 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383356 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383365 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383375 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383384 | orchestrator |
2025-11-01 13:06:30.383394 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-11-01 13:06:30.383404 | orchestrator | Saturday 01 November 2025 12:54:13 +0000 (0:00:01.273) 0:00:09.186 *****
2025-11-01 13:06:30.383413 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383423 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383432 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383442 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383451 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383461 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383470 | orchestrator |
2025-11-01 13:06:30.383480 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-11-01 13:06:30.383490 | orchestrator | Saturday 01 November 2025 12:54:14 +0000 (0:00:00.890) 0:00:10.077 *****
2025-11-01 13:06:30.383499 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383509 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383519 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383529 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383538 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383548 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383557 | orchestrator |
2025-11-01 13:06:30.383580 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-11-01 13:06:30.383591 | orchestrator | Saturday 01 November 2025 12:54:15 +0000 (0:00:01.574) 0:00:11.651 *****
2025-11-01 13:06:30.383600 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.383611 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.383629 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.383639 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.383648 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.383657 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.383667 | orchestrator |
2025-11-01 13:06:30.383677 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-11-01 13:06:30.383686 | orchestrator | Saturday 01 November 2025 12:54:17 +0000 (0:00:01.379) 0:00:13.031 *****
2025-11-01 13:06:30.383696 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383706 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383715 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383725 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383734 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383744 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383753 | orchestrator |
2025-11-01 13:06:30.383763 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-11-01 13:06:30.383773 | orchestrator | Saturday 01 November 2025 12:54:18 +0000 (0:00:01.364) 0:00:14.395 *****
2025-11-01 13:06:30.383782 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-01 13:06:30.383792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-01 13:06:30.383802 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-01 13:06:30.383811 | orchestrator |
2025-11-01 13:06:30.383821 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-11-01 13:06:30.383830 | orchestrator | Saturday 01 November 2025 12:54:19 +0000 (0:00:00.718) 0:00:15.113 *****
2025-11-01 13:06:30.383840 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.383849 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.383859 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.383868 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.383878 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.383887 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.383897 | orchestrator |
2025-11-01 13:06:30.383917 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-11-01 13:06:30.383927 | orchestrator | Saturday 01 November 2025 12:54:21 +0000 (0:00:02.166) 0:00:17.280 *****
2025-11-01 13:06:30.383937 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-01 13:06:30.384302 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-01 13:06:30.384315 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-01 13:06:30.384325 | orchestrator |
2025-11-01 13:06:30.384334 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-11-01 13:06:30.384344 | orchestrator | Saturday 01 November 2025 12:54:25 +0000 (0:00:04.006) 0:00:21.287 *****
2025-11-01 13:06:30.384354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-11-01 13:06:30.384364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-11-01 13:06:30.384374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-11-01 13:06:30.384384 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384393 | orchestrator |
2025-11-01 13:06:30.384403 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-11-01 13:06:30.384412 | orchestrator | Saturday 01 November 2025 12:54:26 +0000 (0:00:01.112) 0:00:22.399 *****
2025-11-01 13:06:30.384424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384466 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384475 | orchestrator |
2025-11-01 13:06:30.384485 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-11-01 13:06:30.384494 | orchestrator | Saturday 01 November 2025 12:54:28 +0000 (0:00:01.680) 0:00:24.079 *****
2025-11-01 13:06:30.384507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384547 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384557 | orchestrator |
2025-11-01 13:06:30.384566 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-11-01 13:06:30.384576 | orchestrator | Saturday 01 November 2025 12:54:28 +0000 (0:00:00.624) 0:00:24.703 *****
2025-11-01 13:06:30.384596 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-01 12:54:22.547851', 'end': '2025-11-01 12:54:22.864937', 'delta': '0:00:00.317086', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-01 12:54:23.929384', 'end': '2025-11-01 12:54:24.241149', 'delta': '0:00:00.311765', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-01 12:54:24.865949', 'end': '2025-11-01 12:54:25.157728', 'delta': '0:00:00.291779', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-11-01 13:06:30.384636 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384646 | orchestrator |
2025-11-01 13:06:30.384656 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-11-01 13:06:30.384666 | orchestrator | Saturday 01 November 2025 12:54:29 +0000 (0:00:00.306) 0:00:25.010 *****
2025-11-01 13:06:30.384675 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.384685 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.384695 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.384704 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.384714 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.384723 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.384733 | orchestrator |
2025-11-01 13:06:30.384742 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-11-01 13:06:30.384752 | orchestrator | Saturday 01 November 2025 12:54:33 +0000 (0:00:04.382) 0:00:29.393 *****
2025-11-01 13:06:30.384762 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-01 13:06:30.384772 | orchestrator |
2025-11-01 13:06:30.384782 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-11-01 13:06:30.384791 | orchestrator | Saturday 01 November 2025 12:54:34 +0000 (0:00:01.029) 0:00:30.423 *****
2025-11-01 13:06:30.384801 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384811 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.384821 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.384831 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.384840 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.384850 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.384859 | orchestrator |
2025-11-01 13:06:30.384874 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-11-01 13:06:30.384884 | orchestrator | Saturday 01 November 2025 12:54:37 +0000 (0:00:03.059) 0:00:33.483 *****
2025-11-01 13:06:30.384893 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384903 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.384912 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.384922 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.384932 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.384941 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.384951 | orchestrator |
2025-11-01 13:06:30.384961 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-11-01 13:06:30.384970 | orchestrator | Saturday 01 November 2025 12:54:39 +0000 (0:00:02.199) 0:00:35.682 *****
2025-11-01 13:06:30.384980 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.384989 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.384999 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.385009 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.385018 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.385028 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.385037 | orchestrator |
2025-11-01 13:06:30.385047 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-11-01 13:06:30.385057 | orchestrator | Saturday 01 November 2025 12:54:43 +0000 (0:00:03.910) 0:00:39.593 *****
2025-11-01 13:06:30.385569 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.385590 | orchestrator |
2025-11-01 13:06:30.385600 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-11-01 13:06:30.385610 | orchestrator | Saturday 01 November 2025 12:54:44 +0000 (0:00:00.347) 0:00:39.941 *****
2025-11-01 13:06:30.385619 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.385637 | orchestrator |
2025-11-01 13:06:30.385647 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-11-01 13:06:30.385656 | orchestrator | Saturday 01 November 2025 12:54:44 +0000 (0:00:00.633) 0:00:40.574 *****
2025-11-01 13:06:30.385666 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.385676 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.385685 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.385695 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.385704 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.385714 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.385724 | orchestrator |
2025-11-01 13:06:30.385760 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-11-01 13:06:30.385771 | orchestrator | Saturday 01 November 2025 12:54:45 +0000 (0:00:01.296) 0:00:41.871 *****
2025-11-01 13:06:30.385781 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.385791 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.385800 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.385810 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.385819 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.385829 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.385839 | orchestrator |
2025-11-01 13:06:30.385848 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-11-01 13:06:30.385858 | orchestrator | Saturday 01 November 2025 12:54:47 +0000 (0:00:01.873) 0:00:43.745 *****
2025-11-01 13:06:30.385868 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.385877 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.385887 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.385896 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.385906 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.385915 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.385925 | orchestrator |
2025-11-01 13:06:30.385934 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-11-01 13:06:30.385944 | orchestrator | Saturday 01 November 2025 12:54:48 +0000 (0:00:00.985) 0:00:44.730 *****
2025-11-01 13:06:30.385953 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.386080 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.386091 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.386100 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.386110 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.386119 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.386129 | orchestrator |
2025-11-01 13:06:30.386138 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-11-01 13:06:30.386148 | orchestrator | Saturday 01 November 2025 12:54:50 +0000 (0:00:01.410) 0:00:46.141 *****
2025-11-01 13:06:30.386158 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.386167 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.386177 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.386188 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.386221 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.386233 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.386244 | orchestrator |
2025-11-01 13:06:30.386256 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-11-01 13:06:30.386267 | orchestrator | Saturday 01 November 2025 12:54:51 +0000 (0:00:01.125) 0:00:47.266 *****
2025-11-01 13:06:30.386278 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.386326 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.386339 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.386350 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.386360 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.386371 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.386382 | orchestrator |
2025-11-01 13:06:30.386393 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-11-01 13:06:30.386816 | orchestrator | Saturday 01 November 2025 12:54:52 +0000 (0:00:01.192) 0:00:48.459 *****
2025-11-01 13:06:30.386841 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.386851 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.386860 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.386870 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.386879 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.386889 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.386898 | orchestrator |
2025-11-01 13:06:30.386908 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-11-01 13:06:30.386918 | orchestrator | Saturday 01 November 2025 12:54:53 +0000 (0:00:00.864) 0:00:49.323 *****
2025-11-01 13:06:30.386935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef', 'dm-uuid-LVM-fDTpWClBjZ9Us4p8lfhtANZK4vLC820taYmvievWGuCmwhc1CUqHSku1d3Bz3JoN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.386948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad', 'dm-uuid-LVM-pdVlv3KVvcnVqMGiaDwkB46kRxQvUgkdTSSCTkG3pO8o7wfA4qNO43l3AMC7123p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.387721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1', 'dm-uuid-LVM-ehIfJlPiGc2uigZOsqopqiFLANEOix1Xdj3JmMubFLusgjuIjBl2BirrwsTyULbt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:06:30.388103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yXwX0t-f9iC-iCcI-yN1X-tJVr-wlAT-2Kim49', 'scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826', 'scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:06:30.388121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774', 'dm-uuid-LVM-r8tx29ZMlQVBBI2chiCFB5cyO1pdwz8LLdk2mRfOHl9NPN3EYVCijdNqhnTrpmNE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i4h7F2-YliL-SO9h-Csgw-3CJZ-dMHf-khJwX8', 'scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef', 'scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:06:30.388405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f', 'scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:06:30.388435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:06:30.388471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-11-01 13:06:30.388602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.388626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EUQzMZ-PMCb-pPIT-wNZG-jNnZ-MeqS-hGDXl3', 'scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db', 'scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.388637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EP6xe2-EOUm-XIoM-j2QM-F2TH-29vI-sqP21Z', 'scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805', 'scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.388706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03', 'scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.388721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.388738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857', 'dm-uuid-LVM-TGwqzibQaZzWxavZ7bwpJb5Bm19fuc8dzN6cYcJaRnPY0G7QShibxCft9nmjmY1y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73', 'dm-uuid-LVM-oMXBBs41xleAnydzHhAkrMLMr1a9xjvhy4VgzGoZt3LJiPBQovI0rmeIzq1Qo46Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388774 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.388785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388902 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.388996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part1', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part14', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part15', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part16', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389240 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ogQA8B-7r4e-vqdN-CHaY-teu3-lKEI-fsce3l', 'scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa', 'scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MqJV1c-Heey-RafL-3X6C-WDDi-I2Rp-LWSwHf', 'scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8', 'scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc', 'scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389374 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.389385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389565 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.389575 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.389585 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.389603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:06:30.389775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389851 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:06:30.389865 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.389875 | orchestrator | 2025-11-01 13:06:30.389885 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-11-01 13:06:30.389895 | orchestrator | Saturday 01 November 2025 12:54:55 +0000 (0:00:02.040) 0:00:51.364 ***** 2025-11-01 13:06:30.389906 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef', 'dm-uuid-LVM-fDTpWClBjZ9Us4p8lfhtANZK4vLC820taYmvievWGuCmwhc1CUqHSku1d3Bz3JoN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.389918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad', 'dm-uuid-LVM-pdVlv3KVvcnVqMGiaDwkB46kRxQvUgkdTSSCTkG3pO8o7wfA4qNO43l3AMC7123p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.389928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.389945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.389955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1', 'dm-uuid-LVM-ehIfJlPiGc2uigZOsqopqiFLANEOix1Xdj3JmMubFLusgjuIjBl2BirrwsTyULbt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774', 'dm-uuid-LVM-r8tx29ZMlQVBBI2chiCFB5cyO1pdwz8LLdk2mRfOHl9NPN3EYVCijdNqhnTrpmNE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-01 13:06:30.390323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390431 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yXwX0t-f9iC-iCcI-yN1X-tJVr-wlAT-2Kim49', 'scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826', 'scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390474 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i4h7F2-YliL-SO9h-Csgw-3CJZ-dMHf-khJwX8', 'scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef', 'scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f', 'scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390594 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390678 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-01 13:06:30.390703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EUQzMZ-PMCb-pPIT-wNZG-jNnZ-MeqS-hGDXl3', 'scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db', 'scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EP6xe2-EOUm-XIoM-j2QM-F2TH-29vI-sqP21Z', 'scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805', 'scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857', 'dm-uuid-LVM-TGwqzibQaZzWxavZ7bwpJb5Bm19fuc8dzN6cYcJaRnPY0G7QShibxCft9nmjmY1y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03', 'scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73', 'dm-uuid-LVM-oMXBBs41xleAnydzHhAkrMLMr1a9xjvhy4VgzGoZt3LJiPBQovI0rmeIzq1Qo46Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390842 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.390852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390965 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.390993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391020 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391036 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391047 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391150 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391161 
| orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391171 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391189 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.391221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 
13:06:30.391298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part1', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part14', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part15', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part16', 'scsi-SQEMU_QEMU_HARDDISK_598994d1-b5fd-49e7-a955-1ee24af64c72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391314 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391324 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391347 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, 
[])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391357 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391439 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391454 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391465 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391482 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-01 13:06:30.391562 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391577 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391587 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391597 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ogQA8B-7r4e-vqdN-CHaY-teu3-lKEI-fsce3l', 'scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa', 'scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391679 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b738f3f-811f-4b8a-84ab-a2aefc3daf42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-01 13:06:30.391695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MqJV1c-Heey-RafL-3X6C-WDDi-I2Rp-LWSwHf', 'scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8', 'scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc', 'scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391756 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 
'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391778 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.391788 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.391798 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.391869 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391883 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391893 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391915 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391930 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.391953 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.392024 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.392038 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.392054 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_80247df9-6714-4677-8c78-bf7fdbf74e7f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-11-01 13:06:30.392072 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:06:30.392082 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.392092 | orchestrator | 2025-11-01 13:06:30.392102 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-01 13:06:30.392113 | orchestrator | Saturday 01 November 2025 12:54:57 +0000 (0:00:02.080) 0:00:53.445 ***** 2025-11-01 13:06:30.392180 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.392194 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.392222 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.392232 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.392242 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.392251 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.392261 | orchestrator | 2025-11-01 13:06:30.392270 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-11-01 13:06:30.392280 | orchestrator | Saturday 01 November 2025 12:55:00 +0000 (0:00:02.797) 0:00:56.242 ***** 2025-11-01 13:06:30.392303 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.392313 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.392323 | 
orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.392332 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.392342 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.392351 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.392368 | orchestrator | 2025-11-01 13:06:30.392378 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-01 13:06:30.392388 | orchestrator | Saturday 01 November 2025 12:55:02 +0000 (0:00:01.931) 0:00:58.173 ***** 2025-11-01 13:06:30.392397 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.392407 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.392417 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.392426 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.392436 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.392445 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.392455 | orchestrator | 2025-11-01 13:06:30.392464 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-01 13:06:30.392474 | orchestrator | Saturday 01 November 2025 12:55:03 +0000 (0:00:01.730) 0:00:59.904 ***** 2025-11-01 13:06:30.392484 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.392494 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.392503 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.392513 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.392522 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.392532 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.392542 | orchestrator | 2025-11-01 13:06:30.392551 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-01 13:06:30.392561 | orchestrator | Saturday 01 November 2025 12:55:05 +0000 (0:00:01.027) 0:01:00.932 ***** 2025-11-01 13:06:30.392571 | orchestrator | skipping: 
[testbed-node-3] 2025-11-01 13:06:30.392581 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.392590 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.392600 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.392610 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.392619 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.392629 | orchestrator | 2025-11-01 13:06:30.392639 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-01 13:06:30.392648 | orchestrator | Saturday 01 November 2025 12:55:06 +0000 (0:00:01.697) 0:01:02.629 ***** 2025-11-01 13:06:30.392658 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.392668 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.392677 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.392687 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.392697 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.392706 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.392716 | orchestrator | 2025-11-01 13:06:30.392726 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-01 13:06:30.392735 | orchestrator | Saturday 01 November 2025 12:55:08 +0000 (0:00:01.762) 0:01:04.392 ***** 2025-11-01 13:06:30.392745 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-01 13:06:30.392755 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-01 13:06:30.392765 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-01 13:06:30.392774 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-01 13:06:30.392789 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-01 13:06:30.392799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 13:06:30.392810 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 
2025-11-01 13:06:30.392821 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-01 13:06:30.392833 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-01 13:06:30.392844 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-11-01 13:06:30.392855 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-01 13:06:30.392866 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-01 13:06:30.392877 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-11-01 13:06:30.392888 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-11-01 13:06:30.392899 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-01 13:06:30.392916 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-11-01 13:06:30.392926 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-11-01 13:06:30.392938 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-11-01 13:06:30.392949 | orchestrator | 2025-11-01 13:06:30.392960 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-01 13:06:30.392971 | orchestrator | Saturday 01 November 2025 12:55:14 +0000 (0:00:05.742) 0:01:10.137 ***** 2025-11-01 13:06:30.392983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 13:06:30.392994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 13:06:30.393005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 13:06:30.393016 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393027 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-01 13:06:30.393039 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-01 13:06:30.393050 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-01 13:06:30.393061 | orchestrator | skipping: [testbed-node-4] 
2025-11-01 13:06:30.393071 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-01 13:06:30.393083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-01 13:06:30.393123 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 13:06:30.393136 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 13:06:30.393147 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-01 13:06:30.393158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 13:06:30.393169 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.393178 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-11-01 13:06:30.393188 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-11-01 13:06:30.393197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-11-01 13:06:30.393223 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.393233 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.393242 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-11-01 13:06:30.393252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-11-01 13:06:30.393262 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-11-01 13:06:30.393271 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.393281 | orchestrator | 2025-11-01 13:06:30.393291 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-01 13:06:30.393301 | orchestrator | Saturday 01 November 2025 12:55:15 +0000 (0:00:01.549) 0:01:11.686 ***** 2025-11-01 13:06:30.393310 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.393320 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.393329 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.393339 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.393349 | orchestrator | 2025-11-01 13:06:30.393359 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-01 13:06:30.393369 | orchestrator | Saturday 01 November 2025 12:55:17 +0000 (0:00:02.183) 0:01:13.870 ***** 2025-11-01 13:06:30.393379 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393388 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.393398 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.393408 | orchestrator | 2025-11-01 13:06:30.393417 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-01 13:06:30.393427 | orchestrator | Saturday 01 November 2025 12:55:18 +0000 (0:00:00.600) 0:01:14.471 ***** 2025-11-01 13:06:30.393436 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393446 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.393462 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.393472 | orchestrator | 2025-11-01 13:06:30.393482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-01 13:06:30.393491 | orchestrator | Saturday 01 November 2025 12:55:19 +0000 (0:00:00.582) 0:01:15.053 ***** 2025-11-01 13:06:30.393501 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393510 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.393520 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.393530 | orchestrator | 2025-11-01 13:06:30.393539 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-01 13:06:30.393549 | orchestrator | Saturday 01 November 2025 12:55:19 +0000 (0:00:00.582) 0:01:15.635 ***** 2025-11-01 13:06:30.393558 | orchestrator | 
ok: [testbed-node-3] 2025-11-01 13:06:30.393568 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.393578 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.393587 | orchestrator | 2025-11-01 13:06:30.393597 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-01 13:06:30.393606 | orchestrator | Saturday 01 November 2025 12:55:20 +0000 (0:00:00.561) 0:01:16.196 ***** 2025-11-01 13:06:30.393616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.393630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.393640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.393649 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393659 | orchestrator | 2025-11-01 13:06:30.393668 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 13:06:30.393678 | orchestrator | Saturday 01 November 2025 12:55:20 +0000 (0:00:00.576) 0:01:16.773 ***** 2025-11-01 13:06:30.393688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.393697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.393706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.393716 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393725 | orchestrator | 2025-11-01 13:06:30.393735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 13:06:30.393744 | orchestrator | Saturday 01 November 2025 12:55:21 +0000 (0:00:00.700) 0:01:17.473 ***** 2025-11-01 13:06:30.393754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.393763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.393773 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-11-01 13:06:30.393783 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.393792 | orchestrator | 2025-11-01 13:06:30.393802 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 13:06:30.393811 | orchestrator | Saturday 01 November 2025 12:55:22 +0000 (0:00:00.755) 0:01:18.229 ***** 2025-11-01 13:06:30.393821 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.393830 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.393840 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.393849 | orchestrator | 2025-11-01 13:06:30.393859 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 13:06:30.393868 | orchestrator | Saturday 01 November 2025 12:55:22 +0000 (0:00:00.464) 0:01:18.694 ***** 2025-11-01 13:06:30.393878 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 13:06:30.393887 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 13:06:30.393897 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 13:06:30.393906 | orchestrator | 2025-11-01 13:06:30.393941 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-01 13:06:30.393952 | orchestrator | Saturday 01 November 2025 12:55:24 +0000 (0:00:01.584) 0:01:20.278 ***** 2025-11-01 13:06:30.393962 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:06:30.393972 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:06:30.393988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:06:30.393998 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 13:06:30.394008 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 13:06:30.394043 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 13:06:30.394055 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 13:06:30.394065 | orchestrator | 2025-11-01 13:06:30.394074 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-01 13:06:30.394084 | orchestrator | Saturday 01 November 2025 12:55:25 +0000 (0:00:00.933) 0:01:21.211 ***** 2025-11-01 13:06:30.394093 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:06:30.394103 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:06:30.394112 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:06:30.394122 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 13:06:30.394132 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 13:06:30.394141 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 13:06:30.394150 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 13:06:30.394160 | orchestrator | 2025-11-01 13:06:30.394169 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 13:06:30.394179 | orchestrator | Saturday 01 November 2025 12:55:29 +0000 (0:00:04.254) 0:01:25.466 ***** 2025-11-01 13:06:30.394189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.394250 | orchestrator | 2025-11-01 13:06:30.394262 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2025-11-01 13:06:30.394272 | orchestrator | Saturday 01 November 2025 12:55:31 +0000 (0:00:02.293) 0:01:27.759 ***** 2025-11-01 13:06:30.394282 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.394291 | orchestrator | 2025-11-01 13:06:30.394301 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 13:06:30.394311 | orchestrator | Saturday 01 November 2025 12:55:34 +0000 (0:00:02.288) 0:01:30.048 ***** 2025-11-01 13:06:30.394320 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.394330 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.394340 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.394349 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.394359 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.394369 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.394378 | orchestrator | 2025-11-01 13:06:30.394393 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 13:06:30.394403 | orchestrator | Saturday 01 November 2025 12:55:37 +0000 (0:00:02.922) 0:01:32.971 ***** 2025-11-01 13:06:30.394413 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.394423 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.394432 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.394441 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.394449 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.394456 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.394464 | orchestrator | 2025-11-01 13:06:30.394472 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 13:06:30.394480 | orchestrator | Saturday 01 November 2025 12:55:39 +0000 
(0:00:02.025) 0:01:34.997 ***** 2025-11-01 13:06:30.394494 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.394502 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.394509 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.394517 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.394525 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.394533 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.394540 | orchestrator | 2025-11-01 13:06:30.394548 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 13:06:30.394556 | orchestrator | Saturday 01 November 2025 12:55:40 +0000 (0:00:01.769) 0:01:36.766 ***** 2025-11-01 13:06:30.394564 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.394572 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.394580 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.394587 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.394595 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.394603 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.394611 | orchestrator | 2025-11-01 13:06:30.394618 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 13:06:30.394626 | orchestrator | Saturday 01 November 2025 12:55:42 +0000 (0:00:01.625) 0:01:38.392 ***** 2025-11-01 13:06:30.394634 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.394642 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.394650 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.394657 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.394665 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.394673 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.394681 | orchestrator | 2025-11-01 13:06:30.394689 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-11-01 13:06:30.394722 | orchestrator | Saturday 01 November 2025 12:55:44 +0000 (0:00:01.585) 0:01:39.978 ***** 2025-11-01 13:06:30.394732 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.394740 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.394748 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.394755 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.394763 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.394771 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.394779 | orchestrator | 2025-11-01 13:06:30.394787 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 13:06:30.394795 | orchestrator | Saturday 01 November 2025 12:55:44 +0000 (0:00:00.733) 0:01:40.711 ***** 2025-11-01 13:06:30.394802 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.394810 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.394818 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.394826 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.394833 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.394841 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.394849 | orchestrator | 2025-11-01 13:06:30.394857 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 13:06:30.394864 | orchestrator | Saturday 01 November 2025 12:55:45 +0000 (0:00:01.005) 0:01:41.717 ***** 2025-11-01 13:06:30.394872 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.394880 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.394888 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.394896 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.394903 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.394911 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.394919 | orchestrator | 2025-11-01 
13:06:30.394927 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 13:06:30.394934 | orchestrator | Saturday 01 November 2025 12:55:47 +0000 (0:00:01.297) 0:01:43.014 ***** 2025-11-01 13:06:30.394942 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.394950 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.394958 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.394965 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.394979 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.394986 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.394994 | orchestrator | 2025-11-01 13:06:30.395002 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 13:06:30.395010 | orchestrator | Saturday 01 November 2025 12:55:48 +0000 (0:00:01.601) 0:01:44.616 ***** 2025-11-01 13:06:30.395018 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.395025 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.395033 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.395041 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.395049 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395056 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395064 | orchestrator | 2025-11-01 13:06:30.395072 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 13:06:30.395080 | orchestrator | Saturday 01 November 2025 12:55:49 +0000 (0:00:00.683) 0:01:45.299 ***** 2025-11-01 13:06:30.395088 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.395095 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.395103 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.395111 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.395119 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.395127 | 
orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.395134 | orchestrator | 2025-11-01 13:06:30.395142 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 13:06:30.395150 | orchestrator | Saturday 01 November 2025 12:55:50 +0000 (0:00:00.966) 0:01:46.265 ***** 2025-11-01 13:06:30.395158 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.395166 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.395174 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.395181 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.395189 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395197 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395220 | orchestrator | 2025-11-01 13:06:30.395228 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 13:06:30.395236 | orchestrator | Saturday 01 November 2025 12:55:51 +0000 (0:00:00.779) 0:01:47.044 ***** 2025-11-01 13:06:30.395244 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.395252 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.395260 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.395268 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.395276 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395284 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395291 | orchestrator | 2025-11-01 13:06:30.395299 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 13:06:30.395307 | orchestrator | Saturday 01 November 2025 12:55:52 +0000 (0:00:01.043) 0:01:48.088 ***** 2025-11-01 13:06:30.395315 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.395323 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.395331 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.395339 | orchestrator | skipping: [testbed-node-0] 2025-11-01 
13:06:30.395347 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395355 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395362 | orchestrator | 2025-11-01 13:06:30.395370 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 13:06:30.395378 | orchestrator | Saturday 01 November 2025 12:55:52 +0000 (0:00:00.775) 0:01:48.864 ***** 2025-11-01 13:06:30.395386 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.395394 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.395401 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.395409 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.395417 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395425 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395432 | orchestrator | 2025-11-01 13:06:30.395440 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 13:06:30.395517 | orchestrator | Saturday 01 November 2025 12:55:53 +0000 (0:00:00.977) 0:01:49.842 ***** 2025-11-01 13:06:30.395534 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.395542 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.395550 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.395558 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.395565 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.395573 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.395581 | orchestrator | 2025-11-01 13:06:30.395615 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 13:06:30.395624 | orchestrator | Saturday 01 November 2025 12:55:54 +0000 (0:00:00.643) 0:01:50.485 ***** 2025-11-01 13:06:30.395632 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.395640 | orchestrator | skipping: [testbed-node-4] 2025-11-01 
13:06:30.395648 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.395656 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.395664 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.395672 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.395679 | orchestrator | 2025-11-01 13:06:30.395687 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 13:06:30.395695 | orchestrator | Saturday 01 November 2025 12:55:55 +0000 (0:00:01.005) 0:01:51.490 ***** 2025-11-01 13:06:30.395703 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.395711 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.395719 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.395727 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.395734 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.395742 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.395750 | orchestrator | 2025-11-01 13:06:30.395758 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 13:06:30.395765 | orchestrator | Saturday 01 November 2025 12:55:56 +0000 (0:00:00.824) 0:01:52.315 ***** 2025-11-01 13:06:30.395773 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.395781 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.395789 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.395796 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.395804 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.395812 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.395820 | orchestrator | 2025-11-01 13:06:30.395828 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-11-01 13:06:30.395835 | orchestrator | Saturday 01 November 2025 12:55:57 +0000 (0:00:01.447) 0:01:53.763 ***** 2025-11-01 13:06:30.395843 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.395851 | 
orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.395859 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.395867 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.395875 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.395882 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.395890 | orchestrator | 2025-11-01 13:06:30.395898 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-11-01 13:06:30.395906 | orchestrator | Saturday 01 November 2025 12:55:59 +0000 (0:00:01.659) 0:01:55.423 ***** 2025-11-01 13:06:30.395914 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.395922 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.395930 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.395937 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.395945 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.395953 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.395961 | orchestrator | 2025-11-01 13:06:30.395969 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-11-01 13:06:30.395977 | orchestrator | Saturday 01 November 2025 12:56:02 +0000 (0:00:02.825) 0:01:58.248 ***** 2025-11-01 13:06:30.395985 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.395998 | orchestrator | 2025-11-01 13:06:30.396006 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-11-01 13:06:30.396014 | orchestrator | Saturday 01 November 2025 12:56:03 +0000 (0:00:01.270) 0:01:59.519 ***** 2025-11-01 13:06:30.396022 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.396030 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.396038 | orchestrator | 
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Saturday 01 November 2025 12:56:04 +0000 (0:00:00.561) 0:02:00.081 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Saturday 01 November 2025 12:56:04 +0000 (0:00:00.730) 0:02:00.811 *****
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: 
[testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Saturday 01 November 2025 12:56:06 +0000 (0:00:01.352) 0:02:02.164 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Saturday 01 November 2025 12:56:07 +0000 (0:00:01.286) 0:02:03.450 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Saturday 01 November 2025 12:56:08 +0000 (0:00:00.593) 0:02:04.044 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Saturday 01 November 2025 12:56:09 +0000 (0:00:00.945) 0:02:04.989 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Saturday 01 November 2025 12:56:09 +0000 (0:00:00.637) 0:02:05.627 *****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Saturday 01 November 2025 12:56:11 +0000 (0:00:01.367) 0:02:06.995 *****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Saturday 01 November 2025 12:56:58 +0000 (0:00:47.234) 0:02:54.229 *****
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Saturday 01 November 2025 12:56:59 +0000 (0:00:00.761) 0:02:54.991 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Saturday 01 November 2025 12:56:59 +0000 (0:00:00.886) 0:02:55.877 *****
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Saturday 01 November 2025 12:57:00 +0000 (0:00:00.192) 0:02:56.069 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Saturday 01 November 2025 12:57:01 +0000 (0:00:00.991) 0:02:57.061 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Saturday 01 November 2025 12:57:02 +0000 (0:00:01.043) 0:02:58.105 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Saturday 01 November 2025 12:57:02 +0000 (0:00:00.748) 0:02:58.853 *****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Saturday 01 November 2025 12:57:05 +0000 (0:00:02.648) 0:03:01.502 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Saturday 01 November 2025 12:57:06 +0000 (0:00:00.770) 0:03:02.272 *****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Saturday 01 November 2025 12:57:07 +0000 (0:00:01.590) 0:03:03.862 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Saturday 01 November 2025 12:57:08 +0000 (0:00:00.978) 0:03:04.840 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Saturday 01 November 2025 12:57:09 +0000 (0:00:00.840) 0:03:05.681 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Saturday 01 November 2025 12:57:10 +0000 (0:00:01.044) 0:03:06.725 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Saturday 01 November 2025 12:57:11 +0000 (0:00:00.753) 0:03:07.479 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Saturday 01 November 2025 12:57:12 +0000 (0:00:00.935) 0:03:08.414 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Saturday 01 November 2025 12:57:13 +0000 (0:00:00.631) 0:03:09.045 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Saturday 01 November 2025 12:57:14 +0000 (0:00:00.910) 0:03:09.956 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Saturday 01 November 2025 12:57:14 +0000 (0:00:00.767) 0:03:10.724 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Saturday 01 November 2025 12:57:16 +0000 (0:00:01.459) 0:03:12.183 *****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Saturday 01 November 2025 12:57:17 +0000 (0:00:01.345) 0:03:13.529 *****
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Saturday 01 November 2025 12:57:24 +0000 (0:00:06.998) 0:03:20.527 *****
skipping: [testbed-node-0]
skipping: 
[testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Saturday 01 November 2025 12:57:25 +0000 (0:00:01.288) 0:03:21.816 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Saturday 01 November 2025 12:57:27 +0000 (0:00:01.142) 0:03:22.959 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] 
******************************************** 2025-11-01 13:06:30.398944 | orchestrator | Saturday 01 November 2025 12:57:28 +0000 (0:00:01.568) 0:03:24.528 ***** 2025-11-01 13:06:30.398951 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.398957 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.398964 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.398970 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.398977 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.398984 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.398990 | orchestrator | 2025-11-01 13:06:30.398997 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-11-01 13:06:30.399007 | orchestrator | Saturday 01 November 2025 12:57:29 +0000 (0:00:00.895) 0:03:25.423 ***** 2025-11-01 13:06:30.399014 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.399021 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.399027 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.399034 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399040 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399047 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399054 | orchestrator | 2025-11-01 13:06:30.399060 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-11-01 13:06:30.399067 | orchestrator | Saturday 01 November 2025 12:57:30 +0000 (0:00:01.442) 0:03:26.865 ***** 2025-11-01 13:06:30.399073 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399080 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399087 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399093 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399100 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399106 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:06:30.399113 | orchestrator | 2025-11-01 13:06:30.399119 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-11-01 13:06:30.399126 | orchestrator | Saturday 01 November 2025 12:57:32 +0000 (0:00:01.200) 0:03:28.067 ***** 2025-11-01 13:06:30.399149 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399157 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399164 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399170 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399177 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399183 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399190 | orchestrator | 2025-11-01 13:06:30.399197 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-11-01 13:06:30.399216 | orchestrator | Saturday 01 November 2025 12:57:33 +0000 (0:00:01.154) 0:03:29.222 ***** 2025-11-01 13:06:30.399223 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399229 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399236 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399243 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399249 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399256 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399263 | orchestrator | 2025-11-01 13:06:30.399269 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-11-01 13:06:30.399276 | orchestrator | Saturday 01 November 2025 12:57:34 +0000 (0:00:00.866) 0:03:30.088 ***** 2025-11-01 13:06:30.399283 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399289 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399296 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399302 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:06:30.399309 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399315 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399322 | orchestrator | 2025-11-01 13:06:30.399328 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-11-01 13:06:30.399335 | orchestrator | Saturday 01 November 2025 12:57:35 +0000 (0:00:01.242) 0:03:31.330 ***** 2025-11-01 13:06:30.399342 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399349 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399355 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399362 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399368 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399375 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399381 | orchestrator | 2025-11-01 13:06:30.399388 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-11-01 13:06:30.399395 | orchestrator | Saturday 01 November 2025 12:57:36 +0000 (0:00:01.114) 0:03:32.445 ***** 2025-11-01 13:06:30.399401 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399413 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399419 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399426 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399433 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399439 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399446 | orchestrator | 2025-11-01 13:06:30.399453 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-11-01 13:06:30.399459 | orchestrator | Saturday 01 November 2025 12:57:37 +0000 (0:00:01.299) 0:03:33.744 ***** 2025-11-01 13:06:30.399466 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:06:30.399473 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399479 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399486 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.399492 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.399499 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.399506 | orchestrator | 2025-11-01 13:06:30.399512 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-11-01 13:06:30.399519 | orchestrator | Saturday 01 November 2025 12:57:41 +0000 (0:00:03.599) 0:03:37.343 ***** 2025-11-01 13:06:30.399526 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.399532 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.399539 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.399546 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399556 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399562 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399569 | orchestrator | 2025-11-01 13:06:30.399576 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-11-01 13:06:30.399582 | orchestrator | Saturday 01 November 2025 12:57:42 +0000 (0:00:01.144) 0:03:38.488 ***** 2025-11-01 13:06:30.399589 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.399596 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.399602 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399609 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.399615 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399622 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399628 | orchestrator | 2025-11-01 13:06:30.399635 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-11-01 13:06:30.399642 | orchestrator | Saturday 01 November 2025 12:57:43 +0000 
(0:00:00.862) 0:03:39.350 ***** 2025-11-01 13:06:30.399648 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399655 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399661 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399668 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399674 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399681 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399687 | orchestrator | 2025-11-01 13:06:30.399694 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-11-01 13:06:30.399701 | orchestrator | Saturday 01 November 2025 12:57:44 +0000 (0:00:01.185) 0:03:40.536 ***** 2025-11-01 13:06:30.399708 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.399714 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.399721 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.399728 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399734 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399741 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399748 | orchestrator | 2025-11-01 13:06:30.399772 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-11-01 13:06:30.399780 | orchestrator | Saturday 01 November 2025 12:57:45 +0000 (0:00:00.988) 0:03:41.524 ***** 2025-11-01 13:06:30.399792 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-11-01 13:06:30.399801 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-11-01 13:06:30.399808 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-11-01 13:06:30.399815 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399822 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-11-01 13:06:30.399829 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-11-01 13:06:30.399836 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-11-01 13:06:30.399842 | orchestrator | skipping: 
[testbed-node-4] 2025-11-01 13:06:30.399849 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399856 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399863 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399869 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399876 | orchestrator | 2025-11-01 13:06:30.399882 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-11-01 13:06:30.399889 | orchestrator | Saturday 01 November 2025 12:57:46 +0000 (0:00:01.236) 0:03:42.761 ***** 2025-11-01 13:06:30.399896 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399902 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399909 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399919 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399926 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399933 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399939 | orchestrator | 2025-11-01 13:06:30.399946 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-11-01 13:06:30.399953 | orchestrator | Saturday 01 November 2025 12:57:47 +0000 (0:00:00.835) 0:03:43.597 ***** 2025-11-01 13:06:30.399959 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.399966 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.399972 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.399979 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.399985 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.399992 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.399999 | orchestrator | 2025-11-01 13:06:30.400005 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-01 13:06:30.400012 | orchestrator | 
Saturday 01 November 2025 12:57:48 +0000 (0:00:01.253) 0:03:44.851 ***** 2025-11-01 13:06:30.400023 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400030 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.400036 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.400043 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400049 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400056 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400063 | orchestrator | 2025-11-01 13:06:30.400069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-11-01 13:06:30.400076 | orchestrator | Saturday 01 November 2025 12:57:49 +0000 (0:00:00.866) 0:03:45.718 ***** 2025-11-01 13:06:30.400083 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400089 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.400096 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.400103 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400109 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400116 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400122 | orchestrator | 2025-11-01 13:06:30.400129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-01 13:06:30.400136 | orchestrator | Saturday 01 November 2025 12:57:50 +0000 (0:00:01.095) 0:03:46.813 ***** 2025-11-01 13:06:30.400142 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400166 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.400174 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.400180 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400187 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400194 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400230 | orchestrator | 2025-11-01 13:06:30.400238 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-01 13:06:30.400245 | orchestrator | Saturday 01 November 2025 12:57:51 +0000 (0:00:00.932) 0:03:47.746 ***** 2025-11-01 13:06:30.400251 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.400258 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.400264 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400271 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.400278 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400284 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400291 | orchestrator | 2025-11-01 13:06:30.400297 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-01 13:06:30.400304 | orchestrator | Saturday 01 November 2025 12:57:53 +0000 (0:00:01.338) 0:03:49.084 ***** 2025-11-01 13:06:30.400311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.400317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.400324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.400330 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400337 | orchestrator | 2025-11-01 13:06:30.400344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 13:06:30.400350 | orchestrator | Saturday 01 November 2025 12:57:53 +0000 (0:00:00.549) 0:03:49.634 ***** 2025-11-01 13:06:30.400357 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.400363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.400370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.400377 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400383 | orchestrator | 2025-11-01 13:06:30.400390 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 13:06:30.400397 | orchestrator | Saturday 01 November 2025 12:57:54 +0000 (0:00:00.445) 0:03:50.080 ***** 2025-11-01 13:06:30.400403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.400410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.400416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.400428 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400434 | orchestrator | 2025-11-01 13:06:30.400441 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 13:06:30.400448 | orchestrator | Saturday 01 November 2025 12:57:54 +0000 (0:00:00.480) 0:03:50.561 ***** 2025-11-01 13:06:30.400454 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.400461 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.400468 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.400474 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400481 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400487 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400494 | orchestrator | 2025-11-01 13:06:30.400500 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 13:06:30.400507 | orchestrator | Saturday 01 November 2025 12:57:55 +0000 (0:00:00.968) 0:03:51.529 ***** 2025-11-01 13:06:30.400514 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 13:06:30.400521 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 13:06:30.400527 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 13:06:30.400534 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-11-01 13:06:30.400540 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400547 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2025-11-01 13:06:30.400557 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400564 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-11-01 13:06:30.400570 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400577 | orchestrator | 2025-11-01 13:06:30.400583 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-11-01 13:06:30.400589 | orchestrator | Saturday 01 November 2025 12:57:58 +0000 (0:00:02.842) 0:03:54.372 ***** 2025-11-01 13:06:30.400595 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.400601 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.400607 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.400613 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.400620 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.400626 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.400632 | orchestrator | 2025-11-01 13:06:30.400638 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 13:06:30.400644 | orchestrator | Saturday 01 November 2025 12:58:02 +0000 (0:00:04.497) 0:03:58.870 ***** 2025-11-01 13:06:30.400650 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.400656 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.400662 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.400668 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.400674 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.400680 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.400686 | orchestrator | 2025-11-01 13:06:30.400693 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-11-01 13:06:30.400699 | orchestrator | Saturday 01 November 2025 12:58:04 +0000 (0:00:01.786) 0:04:00.656 ***** 2025-11-01 13:06:30.400705 | orchestrator | 
skipping: [testbed-node-3] 2025-11-01 13:06:30.400711 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.400717 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.400723 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.400729 | orchestrator | 2025-11-01 13:06:30.400736 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-11-01 13:06:30.400742 | orchestrator | Saturday 01 November 2025 12:58:06 +0000 (0:00:01.501) 0:04:02.158 ***** 2025-11-01 13:06:30.400748 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.400754 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.400760 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.400766 | orchestrator | 2025-11-01 13:06:30.400791 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-11-01 13:06:30.400798 | orchestrator | Saturday 01 November 2025 12:58:06 +0000 (0:00:00.646) 0:04:02.805 ***** 2025-11-01 13:06:30.400808 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.400815 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.400821 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.400827 | orchestrator | 2025-11-01 13:06:30.400833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-11-01 13:06:30.400839 | orchestrator | Saturday 01 November 2025 12:58:08 +0000 (0:00:01.817) 0:04:04.623 ***** 2025-11-01 13:06:30.400845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 13:06:30.400851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 13:06:30.400857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 13:06:30.400863 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400870 | orchestrator | 
2025-11-01 13:06:30.400876 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-11-01 13:06:30.400882 | orchestrator | Saturday 01 November 2025 12:58:09 +0000 (0:00:00.829) 0:04:05.452 ***** 2025-11-01 13:06:30.400888 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.400894 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.400900 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.400906 | orchestrator | 2025-11-01 13:06:30.400912 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-11-01 13:06:30.400919 | orchestrator | Saturday 01 November 2025 12:58:09 +0000 (0:00:00.395) 0:04:05.848 ***** 2025-11-01 13:06:30.400925 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.400931 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.400937 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.400943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.400949 | orchestrator | 2025-11-01 13:06:30.400955 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-11-01 13:06:30.400962 | orchestrator | Saturday 01 November 2025 12:58:11 +0000 (0:00:01.206) 0:04:07.054 ***** 2025-11-01 13:06:30.400968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.400974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.400980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.400986 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.400992 | orchestrator | 2025-11-01 13:06:30.400998 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-11-01 13:06:30.401004 | orchestrator | Saturday 01 November 2025 12:58:11 +0000 
(0:00:00.451) 0:04:07.506 ***** 2025-11-01 13:06:30.401011 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401017 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.401023 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.401029 | orchestrator | 2025-11-01 13:06:30.401035 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-11-01 13:06:30.401041 | orchestrator | Saturday 01 November 2025 12:58:11 +0000 (0:00:00.407) 0:04:07.913 ***** 2025-11-01 13:06:30.401048 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401054 | orchestrator | 2025-11-01 13:06:30.401060 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-11-01 13:06:30.401066 | orchestrator | Saturday 01 November 2025 12:58:12 +0000 (0:00:00.254) 0:04:08.168 ***** 2025-11-01 13:06:30.401072 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401078 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.401084 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.401090 | orchestrator | 2025-11-01 13:06:30.401096 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-11-01 13:06:30.401106 | orchestrator | Saturday 01 November 2025 12:58:12 +0000 (0:00:00.372) 0:04:08.540 ***** 2025-11-01 13:06:30.401112 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401118 | orchestrator | 2025-11-01 13:06:30.401128 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-11-01 13:06:30.401134 | orchestrator | Saturday 01 November 2025 12:58:12 +0000 (0:00:00.274) 0:04:08.814 ***** 2025-11-01 13:06:30.401140 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401147 | orchestrator | 2025-11-01 13:06:30.401153 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-11-01 
13:06:30.401159 | orchestrator | Saturday 01 November 2025 12:58:13 +0000 (0:00:00.242) 0:04:09.056 ***** 2025-11-01 13:06:30.401165 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401171 | orchestrator | 2025-11-01 13:06:30.401177 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-11-01 13:06:30.401184 | orchestrator | Saturday 01 November 2025 12:58:13 +0000 (0:00:00.124) 0:04:09.181 ***** 2025-11-01 13:06:30.401190 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401196 | orchestrator | 2025-11-01 13:06:30.401214 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-11-01 13:06:30.401220 | orchestrator | Saturday 01 November 2025 12:58:14 +0000 (0:00:00.921) 0:04:10.102 ***** 2025-11-01 13:06:30.401226 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401232 | orchestrator | 2025-11-01 13:06:30.401239 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-11-01 13:06:30.401245 | orchestrator | Saturday 01 November 2025 12:58:14 +0000 (0:00:00.270) 0:04:10.372 ***** 2025-11-01 13:06:30.401251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.401257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.401263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.401270 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401276 | orchestrator | 2025-11-01 13:06:30.401282 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-11-01 13:06:30.401288 | orchestrator | Saturday 01 November 2025 12:58:14 +0000 (0:00:00.502) 0:04:10.874 ***** 2025-11-01 13:06:30.401294 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.401317 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.401324 | 
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 01 November 2025 12:58:15 +0000 (0:00:00.412) 0:04:11.287 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 01 November 2025 12:58:15 +0000 (0:00:00.267) 0:04:11.554 *****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Saturday 01 November 2025 12:58:15 +0000 (0:00:00.254) 0:04:11.809 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Saturday 01 November 2025 12:58:17 +0000 (0:00:01.355) 0:04:13.164 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Saturday 01 November 2025 12:58:17 +0000 (0:00:00.408) 0:04:13.573 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Saturday 01 November 2025 12:58:18 +0000 (0:00:01.263) 0:04:14.836 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Saturday 01 November 2025 12:58:19 +0000 (0:00:00.980) 0:04:15.817 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Saturday 01 November 2025 12:58:20 +0000 (0:00:00.691) 0:04:16.509 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Saturday 01 November 2025 12:58:21 +0000 (0:00:00.976) 0:04:17.485 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Saturday 01 November 2025 12:58:22 +0000 (0:00:00.707) 0:04:18.192 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Saturday 01 November 2025 12:58:23 +0000 (0:00:01.320) 0:04:19.512 *****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Saturday 01 November 2025 12:58:24 +0000 (0:00:00.730) 0:04:20.243 *****
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Saturday 01 November 2025 12:58:24 +0000 (0:00:00.450) 0:04:20.694 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 01 November 2025 12:58:25 +0000 (0:00:01.172) 0:04:21.867 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 01 November 2025 12:58:26 +0000 (0:00:00.972) 0:04:22.839 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 01 November 2025 12:58:27 +0000 (0:00:00.598) 0:04:23.437 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 01 November 2025 12:58:28 +0000 (0:00:01.304) 0:04:24.742 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 01 November 2025 12:58:29 +0000 (0:00:00.674) 0:04:25.417 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************
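The handler output above, and the plays that follow, use Ansible's standard console format: one `status: [host]` line per host and task. A minimal sketch (a hypothetical helper, not part of the OSISM testbed job) of tallying those per-host results from a captured log:

```python
import re
from collections import Counter

# Matches standard Ansible console result lines such as
# "changed: [testbed-node-3]" or "changed: [testbed-node-0 -> localhost]".
RESULT = re.compile(r"^(ok|changed|skipping|failed|fatal): \[([\w.-]+)")

def tally(lines):
    """Count task results per inventory host."""
    counts = {}
    for line in lines:
        m = RESULT.match(line.strip())
        if m:
            status, host = m.groups()
            counts.setdefault(host, Counter())[status] += 1
    return counts

sample = [
    "ok: [testbed-node-3]",
    "changed: [testbed-node-3]",
    "skipping: [testbed-node-0]",
]
print(tally(sample))
```

Such a tally mirrors the per-host PLAY RECAP that Ansible itself prints at the end of a run.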
TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 01 November 2025 12:58:30 +0000 (0:00:01.006) 0:04:26.424 *****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 01 November 2025 12:58:31 +0000 (0:00:00.598) 0:04:27.023 *****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 01 November 2025 12:58:31 +0000 (0:00:00.555) 0:04:27.578 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 01 November 2025 12:58:32 +0000 (0:00:01.196) 0:04:28.775 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 01 November 2025 12:58:33 +0000 (0:00:00.492) 0:04:29.268 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 01 November 2025 12:58:33 +0000 (0:00:00.395) 0:04:29.664 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 01 November 2025 12:58:34 +0000 (0:00:00.338) 0:04:30.003 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 01 November 2025 12:58:35 +0000 (0:00:01.135) 0:04:31.138 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 01 November 2025 12:58:35 +0000 (0:00:00.394) 0:04:31.533 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 01 November 2025 12:58:36 +0000 (0:00:00.413) 0:04:31.946 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 01 November 2025 12:58:36 +0000 (0:00:00.850) 0:04:32.797 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 01 November 2025 12:58:38 +0000 (0:00:01.200) 0:04:33.998 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 01 November 2025 12:58:38 +0000 (0:00:00.471) 0:04:34.470 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 01 November 2025 12:58:38 +0000 (0:00:00.417) 0:04:34.888 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 01 November 2025 12:58:39 +0000 (0:00:00.359) 0:04:35.247 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 01 November 2025 12:58:39 +0000 (0:00:00.652) 0:04:35.899 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 01 November 2025 12:58:40 +0000 (0:00:00.359) 0:04:36.259 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 01 November 2025 12:58:40 +0000 (0:00:00.341) 0:04:36.601 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 01 November 2025 12:58:41 +0000 (0:00:00.338) 0:04:36.939 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 01 November 2025 12:58:41 +0000 (0:00:00.394) 0:04:37.333 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 01 November 2025 12:58:42 +0000 (0:00:00.684) 0:04:38.018 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Saturday 01 November 2025 12:58:42 +0000 (0:00:00.644) 0:04:38.663 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Saturday 01 November 2025 12:58:43 +0000 (0:00:00.451) 0:04:39.114 *****
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Saturday 01 November 2025 12:58:44 +0000 (0:00:01.023) 0:04:40.137 *****
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Saturday 01 November 2025 12:58:44 +0000 (0:00:00.255) 0:04:40.393 *****
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Saturday 01 November 2025 12:58:45 +0000 (0:00:01.159) 0:04:41.552 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Saturday 01 November 2025 12:58:46 +0000 (0:00:00.810) 0:04:42.362 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Saturday 01 November 2025 12:58:47 +0000 (0:00:00.831) 0:04:43.194 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Saturday 01 November 2025 12:58:48 +0000 (0:00:01.460) 0:04:44.654 *****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Saturday 01 November 2025 12:58:49 +0000 (0:00:01.157) 0:04:45.811 *****
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Saturday 01 November 2025 12:58:50 +0000 (0:00:00.911) 0:04:46.723 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Saturday 01 November 2025 12:58:51 +0000 (0:00:00.709) 0:04:47.432 *****
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Saturday 01 November 2025 12:58:53 +0000 (0:00:02.060) 0:04:49.493 *****
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Saturday 01 November 2025 12:58:54 +0000 (0:00:00.962) 0:04:50.455 *****
changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-1] => (item=None)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Saturday 01 November 2025 12:58:58 +0000 (0:00:04.236) 0:04:54.692 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Saturday 01 November 2025 12:59:00 +0000 (0:00:01.430) 0:04:56.123 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Saturday 01 November 2025 12:59:00 +0000 (0:00:00.749) 0:04:56.872 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Saturday 01 November 2025 12:59:02 +0000 (0:00:01.088) 0:04:57.960 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Saturday 01 November 2025 12:59:04 +0000 (0:00:02.068) 0:05:00.029 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Saturday 01 November 2025 12:59:05 +0000 (0:00:01.472) 0:05:01.501 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
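The monmap generated here lists the three monitors (testbed-node-0..2) that the play later waits on to form a quorum. Ceph monitors reach consensus by strict majority, so the required quorum size follows directly from the monitor count; a small sketch of that arithmetic:

```python
def quorum_size(mon_count: int) -> int:
    """Smallest number of monitors that constitutes a majority quorum."""
    return mon_count // 2 + 1

def tolerated_failures(mon_count: int) -> int:
    """Monitor failures the cluster can absorb while keeping quorum."""
    return mon_count - quorum_size(mon_count)

# This testbed deploys three mons: quorum needs 2, and 1 may fail.
print(quorum_size(3), tolerated_failures(3))  # → 2 1
```

This is why mon counts are kept odd: going from 3 to 4 mons raises the quorum size without tolerating any additional failure.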
TASK [ceph-mon : Include start_monitor.yml] ************************************
Saturday 01 November 2025 12:59:05 +0000 (0:00:00.391) 0:05:01.893 *****
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Saturday 01 November 2025 12:59:06 +0000 (0:00:00.911) 0:05:02.804 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Saturday 01 November 2025 12:59:07 +0000 (0:00:00.427) 0:05:03.232 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Saturday 01 November 2025 12:59:07 +0000 (0:00:00.399) 0:05:03.631 *****
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Saturday 01 November 2025 12:59:08 +0000 (0:00:01.022) 0:05:04.653 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Saturday 01 November 2025 12:59:10 +0000 (0:00:01.964) 0:05:06.618 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Saturday 01 November 2025 12:59:11 +0000 (0:00:01.199) 0:05:07.818 *****
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Start the monitor service] ************************************
Saturday 01 November 2025 12:59:13 +0000 (0:00:01.805) 0:05:09.624 *****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Saturday 01 November 2025 12:59:15 +0000 (0:00:02.294) 0:05:11.918 *****
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Saturday 01 November 2025 12:59:16 +0000 (0:00:00.680) 0:05:12.599 *****
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Saturday 01 November 2025 12:59:38 +0000 (0:00:21.992) 0:05:34.591 *****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Saturday 01 November 2025 12:59:49 +0000 (0:00:10.426) 0:05:45.018 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Saturday 01 November 2025 12:59:49 +0000 (0:00:00.791) 0:05:45.810 *****
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2bd0ebad2a8741a88791cd2e1b3331da727947e9'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 01 November 2025 13:00:05 +0000 (0:00:15.440) 0:06:01.250 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 01 November 2025 13:00:05 +0000 (0:00:00.423) 0:06:01.674 *****
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 01 November 2025 13:00:06 +0000 (0:00:01.085) 0:06:02.759 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 01 November 2025 13:00:07 +0000 (0:00:00.507) 0:06:03.267 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Saturday 01 November 2025 13:00:07 +0000 (0:00:00.586) 0:06:03.853 *****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Saturday 01 November 2025 13:00:09 +0000 (0:00:01.397) 0:06:05.251 *****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml]
************************ 2025-11-01 13:06:30.404226 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.691) 0:06:05.943 ***** 2025-11-01 13:06:30.404231 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.404237 | orchestrator | 2025-11-01 13:06:30.404242 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 13:06:30.404247 | orchestrator | Saturday 01 November 2025 13:00:10 +0000 (0:00:00.628) 0:06:06.571 ***** 2025-11-01 13:06:30.404253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.404262 | orchestrator | 2025-11-01 13:06:30.404267 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 13:06:30.404273 | orchestrator | Saturday 01 November 2025 13:00:11 +0000 (0:00:00.938) 0:06:07.509 ***** 2025-11-01 13:06:30.404278 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404283 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404289 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404294 | orchestrator | 2025-11-01 13:06:30.404299 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 13:06:30.404305 | orchestrator | Saturday 01 November 2025 13:00:12 +0000 (0:00:00.836) 0:06:08.346 ***** 2025-11-01 13:06:30.404310 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404316 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404321 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404326 | orchestrator | 2025-11-01 13:06:30.404332 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 13:06:30.404337 | orchestrator | Saturday 01 November 2025 13:00:12 +0000 
(0:00:00.337) 0:06:08.683 ***** 2025-11-01 13:06:30.404342 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404348 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404353 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404359 | orchestrator | 2025-11-01 13:06:30.404364 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 13:06:30.404369 | orchestrator | Saturday 01 November 2025 13:00:13 +0000 (0:00:00.642) 0:06:09.325 ***** 2025-11-01 13:06:30.404375 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404380 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404385 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404391 | orchestrator | 2025-11-01 13:06:30.404396 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 13:06:30.404401 | orchestrator | Saturday 01 November 2025 13:00:13 +0000 (0:00:00.372) 0:06:09.697 ***** 2025-11-01 13:06:30.404407 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404412 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404417 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404423 | orchestrator | 2025-11-01 13:06:30.404428 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 13:06:30.404433 | orchestrator | Saturday 01 November 2025 13:00:14 +0000 (0:00:00.825) 0:06:10.523 ***** 2025-11-01 13:06:30.404439 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404444 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404449 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404455 | orchestrator | 2025-11-01 13:06:30.404460 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 13:06:30.404465 | orchestrator | Saturday 01 November 2025 13:00:15 +0000 (0:00:00.407) 
0:06:10.930 ***** 2025-11-01 13:06:30.404471 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404476 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404481 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404487 | orchestrator | 2025-11-01 13:06:30.404495 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 13:06:30.404501 | orchestrator | Saturday 01 November 2025 13:00:15 +0000 (0:00:00.757) 0:06:11.688 ***** 2025-11-01 13:06:30.404506 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404512 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404517 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404522 | orchestrator | 2025-11-01 13:06:30.404528 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 13:06:30.404533 | orchestrator | Saturday 01 November 2025 13:00:16 +0000 (0:00:00.690) 0:06:12.379 ***** 2025-11-01 13:06:30.404538 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404544 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404549 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404554 | orchestrator | 2025-11-01 13:06:30.404560 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 13:06:30.404569 | orchestrator | Saturday 01 November 2025 13:00:17 +0000 (0:00:00.783) 0:06:13.162 ***** 2025-11-01 13:06:30.404574 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404579 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404585 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404590 | orchestrator | 2025-11-01 13:06:30.404595 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 13:06:30.404601 | orchestrator | Saturday 01 November 2025 13:00:17 +0000 (0:00:00.364) 0:06:13.526 ***** 2025-11-01 
13:06:30.404606 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404611 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404617 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404622 | orchestrator | 2025-11-01 13:06:30.404627 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 13:06:30.404633 | orchestrator | Saturday 01 November 2025 13:00:18 +0000 (0:00:00.701) 0:06:14.227 ***** 2025-11-01 13:06:30.404638 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404644 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404649 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404654 | orchestrator | 2025-11-01 13:06:30.404660 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 13:06:30.404665 | orchestrator | Saturday 01 November 2025 13:00:18 +0000 (0:00:00.373) 0:06:14.601 ***** 2025-11-01 13:06:30.404670 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404676 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404695 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404701 | orchestrator | 2025-11-01 13:06:30.404707 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 13:06:30.404712 | orchestrator | Saturday 01 November 2025 13:00:19 +0000 (0:00:00.383) 0:06:14.984 ***** 2025-11-01 13:06:30.404717 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404723 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404728 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404733 | orchestrator | 2025-11-01 13:06:30.404739 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 13:06:30.404744 | orchestrator | Saturday 01 November 2025 13:00:19 +0000 (0:00:00.409) 0:06:15.393 ***** 2025-11-01 13:06:30.404749 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404755 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404760 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404765 | orchestrator | 2025-11-01 13:06:30.404771 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 13:06:30.404776 | orchestrator | Saturday 01 November 2025 13:00:19 +0000 (0:00:00.504) 0:06:15.897 ***** 2025-11-01 13:06:30.404781 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.404787 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.404792 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.404797 | orchestrator | 2025-11-01 13:06:30.404803 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 13:06:30.404808 | orchestrator | Saturday 01 November 2025 13:00:20 +0000 (0:00:00.683) 0:06:16.581 ***** 2025-11-01 13:06:30.404814 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404819 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404824 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404830 | orchestrator | 2025-11-01 13:06:30.404835 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 13:06:30.404840 | orchestrator | Saturday 01 November 2025 13:00:21 +0000 (0:00:00.413) 0:06:16.994 ***** 2025-11-01 13:06:30.404846 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.404851 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404856 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404861 | orchestrator | 2025-11-01 13:06:30.404867 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 13:06:30.404872 | orchestrator | Saturday 01 November 2025 13:00:21 +0000 (0:00:00.406) 0:06:17.400 ***** 2025-11-01 13:06:30.404883 | orchestrator | ok: [testbed-node-0] 
2025-11-01 13:06:30.404888 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.404894 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.404899 | orchestrator | 2025-11-01 13:06:30.404904 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-11-01 13:06:30.404910 | orchestrator | Saturday 01 November 2025 13:00:22 +0000 (0:00:00.860) 0:06:18.261 ***** 2025-11-01 13:06:30.404915 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 13:06:30.404921 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:06:30.404926 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:06:30.404931 | orchestrator | 2025-11-01 13:06:30.404937 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-11-01 13:06:30.404942 | orchestrator | Saturday 01 November 2025 13:00:23 +0000 (0:00:00.835) 0:06:19.096 ***** 2025-11-01 13:06:30.404948 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.404953 | orchestrator | 2025-11-01 13:06:30.404958 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-11-01 13:06:30.404964 | orchestrator | Saturday 01 November 2025 13:00:23 +0000 (0:00:00.637) 0:06:19.733 ***** 2025-11-01 13:06:30.404969 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.404974 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.404983 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.404989 | orchestrator | 2025-11-01 13:06:30.404994 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-11-01 13:06:30.405000 | orchestrator | Saturday 01 November 2025 13:00:24 +0000 (0:00:00.751) 0:06:20.484 ***** 2025-11-01 13:06:30.405005 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405011 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405016 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.405021 | orchestrator | 2025-11-01 13:06:30.405027 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-11-01 13:06:30.405032 | orchestrator | Saturday 01 November 2025 13:00:25 +0000 (0:00:00.615) 0:06:21.100 ***** 2025-11-01 13:06:30.405037 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:06:30.405043 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:06:30.405048 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:06:30.405053 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-11-01 13:06:30.405059 | orchestrator | 2025-11-01 13:06:30.405064 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-11-01 13:06:30.405069 | orchestrator | Saturday 01 November 2025 13:00:36 +0000 (0:00:11.374) 0:06:32.474 ***** 2025-11-01 13:06:30.405075 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.405080 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.405086 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.405091 | orchestrator | 2025-11-01 13:06:30.405096 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-11-01 13:06:30.405102 | orchestrator | Saturday 01 November 2025 13:00:37 +0000 (0:00:00.458) 0:06:32.933 ***** 2025-11-01 13:06:30.405107 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-01 13:06:30.405112 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 13:06:30.405118 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-01 13:06:30.405123 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-01 13:06:30.405128 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.405134 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.405139 | orchestrator | 2025-11-01 13:06:30.405158 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-11-01 13:06:30.405168 | orchestrator | Saturday 01 November 2025 13:00:39 +0000 (0:00:02.303) 0:06:35.236 ***** 2025-11-01 13:06:30.405174 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-01 13:06:30.405179 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 13:06:30.405184 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-01 13:06:30.405190 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:06:30.405195 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-11-01 13:06:30.405213 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-11-01 13:06:30.405219 | orchestrator | 2025-11-01 13:06:30.405224 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-11-01 13:06:30.405230 | orchestrator | Saturday 01 November 2025 13:00:40 +0000 (0:00:01.344) 0:06:36.581 ***** 2025-11-01 13:06:30.405235 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.405241 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.405246 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.405251 | orchestrator | 2025-11-01 13:06:30.405257 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-11-01 13:06:30.405262 | orchestrator | Saturday 01 November 2025 13:00:41 +0000 (0:00:01.103) 0:06:37.684 ***** 2025-11-01 13:06:30.405268 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405273 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405278 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.405284 | 
orchestrator | 2025-11-01 13:06:30.405289 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-11-01 13:06:30.405295 | orchestrator | Saturday 01 November 2025 13:00:42 +0000 (0:00:00.372) 0:06:38.056 ***** 2025-11-01 13:06:30.405300 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405306 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405311 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.405316 | orchestrator | 2025-11-01 13:06:30.405322 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-11-01 13:06:30.405327 | orchestrator | Saturday 01 November 2025 13:00:42 +0000 (0:00:00.355) 0:06:38.412 ***** 2025-11-01 13:06:30.405332 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.405338 | orchestrator | 2025-11-01 13:06:30.405343 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-11-01 13:06:30.405349 | orchestrator | Saturday 01 November 2025 13:00:43 +0000 (0:00:01.109) 0:06:39.522 ***** 2025-11-01 13:06:30.405354 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405360 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405365 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.405370 | orchestrator | 2025-11-01 13:06:30.405376 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-11-01 13:06:30.405381 | orchestrator | Saturday 01 November 2025 13:00:44 +0000 (0:00:00.553) 0:06:40.076 ***** 2025-11-01 13:06:30.405386 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405392 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405397 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.405403 | orchestrator | 2025-11-01 13:06:30.405408 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-11-01 13:06:30.405413 | orchestrator | Saturday 01 November 2025 13:00:44 +0000 (0:00:00.377) 0:06:40.453 ***** 2025-11-01 13:06:30.405419 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.405424 | orchestrator | 2025-11-01 13:06:30.405430 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-11-01 13:06:30.405435 | orchestrator | Saturday 01 November 2025 13:00:45 +0000 (0:00:00.885) 0:06:41.338 ***** 2025-11-01 13:06:30.405441 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405449 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405454 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405460 | orchestrator | 2025-11-01 13:06:30.405470 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-11-01 13:06:30.405475 | orchestrator | Saturday 01 November 2025 13:00:46 +0000 (0:00:01.342) 0:06:42.681 ***** 2025-11-01 13:06:30.405480 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405486 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405491 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405497 | orchestrator | 2025-11-01 13:06:30.405502 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-11-01 13:06:30.405507 | orchestrator | Saturday 01 November 2025 13:00:47 +0000 (0:00:01.101) 0:06:43.782 ***** 2025-11-01 13:06:30.405513 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405518 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405524 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405529 | orchestrator | 2025-11-01 13:06:30.405534 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2025-11-01 13:06:30.405540 | orchestrator | Saturday 01 November 2025 13:00:49 +0000 (0:00:01.774) 0:06:45.557 ***** 2025-11-01 13:06:30.405545 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405551 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405556 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405561 | orchestrator | 2025-11-01 13:06:30.405567 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-11-01 13:06:30.405572 | orchestrator | Saturday 01 November 2025 13:00:51 +0000 (0:00:02.254) 0:06:47.812 ***** 2025-11-01 13:06:30.405578 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405583 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.405588 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-11-01 13:06:30.405594 | orchestrator | 2025-11-01 13:06:30.405599 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-11-01 13:06:30.405605 | orchestrator | Saturday 01 November 2025 13:00:52 +0000 (0:00:00.486) 0:06:48.298 ***** 2025-11-01 13:06:30.405610 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-11-01 13:06:30.405630 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-11-01 13:06:30.405636 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-11-01 13:06:30.405642 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-11-01 13:06:30.405647 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-11-01 13:06:30.405653 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.405658 | orchestrator | 2025-11-01 13:06:30.405663 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-11-01 13:06:30.405669 | orchestrator | Saturday 01 November 2025 13:01:22 +0000 (0:00:30.536) 0:07:18.834 ***** 2025-11-01 13:06:30.405674 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.405679 | orchestrator | 2025-11-01 13:06:30.405685 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-11-01 13:06:30.405690 | orchestrator | Saturday 01 November 2025 13:01:24 +0000 (0:00:01.373) 0:07:20.208 ***** 2025-11-01 13:06:30.405695 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.405701 | orchestrator | 2025-11-01 13:06:30.405706 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-11-01 13:06:30.405711 | orchestrator | Saturday 01 November 2025 13:01:24 +0000 (0:00:00.358) 0:07:20.566 ***** 2025-11-01 13:06:30.405717 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.405722 | orchestrator | 2025-11-01 13:06:30.405728 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-11-01 13:06:30.405733 | orchestrator | Saturday 01 November 2025 13:01:24 +0000 (0:00:00.152) 0:07:20.719 ***** 2025-11-01 13:06:30.405742 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-11-01 13:06:30.405747 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-11-01 13:06:30.405753 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-11-01 13:06:30.405758 | orchestrator | 2025-11-01 13:06:30.405763 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-11-01 13:06:30.405769 | orchestrator | Saturday 01 November 2025 13:01:31 +0000 (0:00:06.750) 0:07:27.469 ***** 2025-11-01 13:06:30.405774 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-11-01 13:06:30.405779 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-11-01 13:06:30.405785 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-11-01 13:06:30.405790 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-11-01 13:06:30.405796 | orchestrator | 2025-11-01 13:06:30.405801 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 13:06:30.405806 | orchestrator | Saturday 01 November 2025 13:01:37 +0000 (0:00:05.560) 0:07:33.030 ***** 2025-11-01 13:06:30.405812 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405817 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405822 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405828 | orchestrator | 2025-11-01 13:06:30.405833 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-11-01 13:06:30.405838 | orchestrator | Saturday 01 November 2025 13:01:37 +0000 (0:00:00.842) 0:07:33.873 ***** 2025-11-01 13:06:30.405844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.405849 | orchestrator | 2025-11-01 13:06:30.405857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-11-01 13:06:30.405863 | orchestrator | Saturday 01 November 2025 13:01:38 +0000 (0:00:00.888) 0:07:34.761 ***** 2025-11-01 13:06:30.405868 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.405874 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.405879 | orchestrator | ok: 
[testbed-node-2] 2025-11-01 13:06:30.405884 | orchestrator | 2025-11-01 13:06:30.405890 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-11-01 13:06:30.405895 | orchestrator | Saturday 01 November 2025 13:01:39 +0000 (0:00:00.370) 0:07:35.132 ***** 2025-11-01 13:06:30.405900 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.405906 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.405911 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.405916 | orchestrator | 2025-11-01 13:06:30.405922 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-11-01 13:06:30.405927 | orchestrator | Saturday 01 November 2025 13:01:40 +0000 (0:00:01.307) 0:07:36.439 ***** 2025-11-01 13:06:30.405932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-11-01 13:06:30.405938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-11-01 13:06:30.405943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-11-01 13:06:30.405948 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.405954 | orchestrator | 2025-11-01 13:06:30.405959 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-11-01 13:06:30.405965 | orchestrator | Saturday 01 November 2025 13:01:41 +0000 (0:00:00.678) 0:07:37.118 ***** 2025-11-01 13:06:30.405970 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.405975 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.405981 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.405986 | orchestrator | 2025-11-01 13:06:30.405991 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-11-01 13:06:30.405997 | orchestrator | 2025-11-01 13:06:30.406002 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 
13:06:30.406007 | orchestrator | Saturday 01 November 2025 13:01:42 +0000 (0:00:00.863) 0:07:37.981 ***** 2025-11-01 13:06:30.406031 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.406038 | orchestrator | 2025-11-01 13:06:30.406060 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 13:06:30.406066 | orchestrator | Saturday 01 November 2025 13:01:42 +0000 (0:00:00.581) 0:07:38.563 ***** 2025-11-01 13:06:30.406072 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.406077 | orchestrator | 2025-11-01 13:06:30.406083 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 13:06:30.406088 | orchestrator | Saturday 01 November 2025 13:01:43 +0000 (0:00:00.856) 0:07:39.420 ***** 2025-11-01 13:06:30.406094 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.406099 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.406105 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.406110 | orchestrator | 2025-11-01 13:06:30.406116 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 13:06:30.406121 | orchestrator | Saturday 01 November 2025 13:01:43 +0000 (0:00:00.358) 0:07:39.778 ***** 2025-11-01 13:06:30.406127 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.406132 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.406138 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.406143 | orchestrator | 2025-11-01 13:06:30.406148 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 13:06:30.406154 | orchestrator | Saturday 01 November 2025 13:01:44 +0000 (0:00:00.817) 0:07:40.595 ***** 
2025-11-01 13:06:30.406159 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406165 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406170 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406176 | orchestrator |
2025-11-01 13:06:30.406181 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-11-01 13:06:30.406186 | orchestrator | Saturday 01 November 2025 13:01:45 +0000 (0:00:00.796) 0:07:41.392 *****
2025-11-01 13:06:30.406192 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406197 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406231 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406237 | orchestrator |
2025-11-01 13:06:30.406242 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-11-01 13:06:30.406248 | orchestrator | Saturday 01 November 2025 13:01:46 +0000 (0:00:01.100) 0:07:42.493 *****
2025-11-01 13:06:30.406253 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406258 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406264 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406269 | orchestrator |
2025-11-01 13:06:30.406274 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-11-01 13:06:30.406280 | orchestrator | Saturday 01 November 2025 13:01:46 +0000 (0:00:00.357) 0:07:42.851 *****
2025-11-01 13:06:30.406285 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406291 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406296 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406302 | orchestrator |
2025-11-01 13:06:30.406307 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-11-01 13:06:30.406312 | orchestrator | Saturday 01 November 2025 13:01:47 +0000 (0:00:00.351) 0:07:43.202 *****
2025-11-01 13:06:30.406318 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406323 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406329 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406334 | orchestrator |
2025-11-01 13:06:30.406339 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-11-01 13:06:30.406345 | orchestrator | Saturday 01 November 2025 13:01:47 +0000 (0:00:00.344) 0:07:43.547 *****
2025-11-01 13:06:30.406350 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406355 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406365 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406370 | orchestrator |
2025-11-01 13:06:30.406376 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-11-01 13:06:30.406384 | orchestrator | Saturday 01 November 2025 13:01:48 +0000 (0:00:01.015) 0:07:44.563 *****
2025-11-01 13:06:30.406390 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406395 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406400 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406406 | orchestrator |
2025-11-01 13:06:30.406411 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-11-01 13:06:30.406416 | orchestrator | Saturday 01 November 2025 13:01:49 +0000 (0:00:00.754) 0:07:45.318 *****
2025-11-01 13:06:30.406422 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406427 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406433 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406438 | orchestrator |
2025-11-01 13:06:30.406443 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-11-01 13:06:30.406449 | orchestrator | Saturday 01 November 2025 13:01:49 +0000 (0:00:00.360) 0:07:45.679 *****
2025-11-01 13:06:30.406454 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406459 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406464 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406468 | orchestrator |
2025-11-01 13:06:30.406473 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-11-01 13:06:30.406478 | orchestrator | Saturday 01 November 2025 13:01:50 +0000 (0:00:00.352) 0:07:46.032 *****
2025-11-01 13:06:30.406483 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406488 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406492 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406497 | orchestrator |
2025-11-01 13:06:30.406502 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-11-01 13:06:30.406507 | orchestrator | Saturday 01 November 2025 13:01:50 +0000 (0:00:00.631) 0:07:46.663 *****
2025-11-01 13:06:30.406511 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406516 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406521 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406526 | orchestrator |
2025-11-01 13:06:30.406530 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-11-01 13:06:30.406535 | orchestrator | Saturday 01 November 2025 13:01:51 +0000 (0:00:00.379) 0:07:47.042 *****
2025-11-01 13:06:30.406540 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406545 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406549 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406554 | orchestrator |
2025-11-01 13:06:30.406561 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-11-01 13:06:30.406566 | orchestrator | Saturday 01 November 2025 13:01:51 +0000 (0:00:00.369) 0:07:47.411 *****
2025-11-01 13:06:30.406571 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406576 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406581 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406585 | orchestrator |
2025-11-01 13:06:30.406590 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-11-01 13:06:30.406595 | orchestrator | Saturday 01 November 2025 13:01:51 +0000 (0:00:00.348) 0:07:47.760 *****
2025-11-01 13:06:30.406600 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406604 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406609 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406614 | orchestrator |
2025-11-01 13:06:30.406619 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-11-01 13:06:30.406624 | orchestrator | Saturday 01 November 2025 13:01:52 +0000 (0:00:00.657) 0:07:48.417 *****
2025-11-01 13:06:30.406628 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406633 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406638 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406643 | orchestrator |
2025-11-01 13:06:30.406651 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-11-01 13:06:30.406656 | orchestrator | Saturday 01 November 2025 13:01:52 +0000 (0:00:00.329) 0:07:48.746 *****
2025-11-01 13:06:30.406660 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406665 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406670 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406675 | orchestrator |
2025-11-01 13:06:30.406679 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-11-01 13:06:30.406684 | orchestrator | Saturday 01 November 2025 13:01:53 +0000 (0:00:00.367) 0:07:49.114 *****
2025-11-01 13:06:30.406689 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406694 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406698 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406703 | orchestrator |
2025-11-01 13:06:30.406708 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-11-01 13:06:30.406712 | orchestrator | Saturday 01 November 2025 13:01:54 +0000 (0:00:00.848) 0:07:49.963 *****
2025-11-01 13:06:30.406717 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406722 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406727 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406732 | orchestrator |
2025-11-01 13:06:30.406736 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-11-01 13:06:30.406741 | orchestrator | Saturday 01 November 2025 13:01:54 +0000 (0:00:00.381) 0:07:50.345 *****
2025-11-01 13:06:30.406746 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-11-01 13:06:30.406751 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-11-01 13:06:30.406756 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-11-01 13:06:30.406760 | orchestrator |
2025-11-01 13:06:30.406765 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-11-01 13:06:30.406770 | orchestrator | Saturday 01 November 2025 13:01:55 +0000 (0:00:00.708) 0:07:51.053 *****
2025-11-01 13:06:30.406775 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.406780 | orchestrator |
2025-11-01 13:06:30.406784 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-11-01 13:06:30.406789 | orchestrator | Saturday 01 November 2025 13:01:55 +0000 (0:00:00.654) 0:07:51.708 *****
2025-11-01 13:06:30.406794 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406799 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406804 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406808 | orchestrator |
2025-11-01 13:06:30.406815 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-11-01 13:06:30.406820 | orchestrator | Saturday 01 November 2025 13:01:56 +0000 (0:00:00.613) 0:07:52.321 *****
2025-11-01 13:06:30.406825 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.406830 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.406835 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.406840 | orchestrator |
2025-11-01 13:06:30.406844 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-11-01 13:06:30.406849 | orchestrator | Saturday 01 November 2025 13:01:56 +0000 (0:00:00.377) 0:07:52.699 *****
2025-11-01 13:06:30.406854 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406859 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406864 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406868 | orchestrator |
2025-11-01 13:06:30.406873 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-11-01 13:06:30.406878 | orchestrator | Saturday 01 November 2025 13:01:57 +0000 (0:00:00.633) 0:07:53.332 *****
2025-11-01 13:06:30.406883 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.406887 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.406892 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.406897 | orchestrator |
2025-11-01 13:06:30.406902 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-11-01 13:06:30.406909 | orchestrator | Saturday 01 November 2025 13:01:57 +0000 (0:00:00.419) 0:07:53.751 *****
2025-11-01 13:06:30.406914 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-01 13:06:30.406919 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-01 13:06:30.406924 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-11-01 13:06:30.406929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-01 13:06:30.406934 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-01 13:06:30.406938 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-11-01 13:06:30.406947 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-01 13:06:30.406952 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-01 13:06:30.406956 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-11-01 13:06:30.406961 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-01 13:06:30.406966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-01 13:06:30.406971 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-11-01 13:06:30.406976 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-01 13:06:30.406981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-01 13:06:30.406985 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-11-01 13:06:30.406990 | orchestrator |
2025-11-01 13:06:30.406995 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-11-01 13:06:30.407000 | orchestrator | Saturday 01 November 2025 13:02:00 +0000 (0:00:02.579) 0:07:56.331 *****
2025-11-01 13:06:30.407005 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407010 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407015 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407020 | orchestrator |
2025-11-01 13:06:30.407024 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-11-01 13:06:30.407029 | orchestrator | Saturday 01 November 2025 13:02:00 +0000 (0:00:00.399) 0:07:56.731 *****
2025-11-01 13:06:30.407034 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.407039 | orchestrator |
2025-11-01 13:06:30.407044 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-11-01 13:06:30.407049 | orchestrator | Saturday 01 November 2025 13:02:01 +0000 (0:00:00.584) 0:07:57.315 *****
2025-11-01 13:06:30.407054 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-01 13:06:30.407058 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-01 13:06:30.407063 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-11-01 13:06:30.407068 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-11-01 13:06:30.407073 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-11-01 13:06:30.407078 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-11-01 13:06:30.407083 | orchestrator |
2025-11-01 13:06:30.407088 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-11-01 13:06:30.407093 | orchestrator | Saturday 01 November 2025 13:02:02 +0000 (0:00:01.303) 0:07:58.619 *****
2025-11-01 13:06:30.407098 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-11-01 13:06:30.407102 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-01 13:06:30.407111 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-11-01 13:06:30.407116 | orchestrator |
2025-11-01 13:06:30.407120 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-11-01 13:06:30.407125 | orchestrator | Saturday 01 November 2025 13:02:04 +0000 (0:00:02.173) 0:08:00.792 *****
2025-11-01 13:06:30.407130 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-01 13:06:30.407135 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-11-01 13:06:30.407143 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:06:30.407148 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-01 13:06:30.407153 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-11-01 13:06:30.407158 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:06:30.407163 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-01 13:06:30.407168 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-11-01 13:06:30.407172 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:06:30.407177 | orchestrator |
2025-11-01 13:06:30.407182 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-11-01 13:06:30.407187 | orchestrator | Saturday 01 November 2025 13:02:06 +0000 (0:00:01.145) 0:08:01.937 *****
2025-11-01 13:06:30.407192 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-01 13:06:30.407197 | orchestrator |
2025-11-01 13:06:30.407213 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-11-01 13:06:30.407218 | orchestrator | Saturday 01 November 2025 13:02:08 +0000 (0:00:02.160) 0:08:04.098 *****
2025-11-01 13:06:30.407223 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.407228 | orchestrator |
2025-11-01 13:06:30.407233 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-11-01 13:06:30.407238 | orchestrator | Saturday 01 November 2025 13:02:08 +0000 (0:00:00.633) 0:08:04.731 *****
2025-11-01 13:06:30.407243 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-277f9d3d-0c20-556e-833f-7bea0f2408d1', 'data_vg': 'ceph-277f9d3d-0c20-556e-833f-7bea0f2408d1'})
2025-11-01 13:06:30.407248 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d83d2135-3529-5759-9738-6f5d85bcdaef', 'data_vg': 'ceph-d83d2135-3529-5759-9738-6f5d85bcdaef'})
2025-11-01 13:06:30.407253 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fea132eb-9454-553c-8b4e-faa263198857', 'data_vg': 'ceph-fea132eb-9454-553c-8b4e-faa263198857'})
2025-11-01 13:06:30.407261 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-780930f3-bf13-5252-a15a-5f9f469ca774', 'data_vg': 'ceph-780930f3-bf13-5252-a15a-5f9f469ca774'})
2025-11-01 13:06:30.407266 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e995aa1-0e3d-5a0e-8d57-e00715a81a73', 'data_vg': 'ceph-1e995aa1-0e3d-5a0e-8d57-e00715a81a73'})
2025-11-01 13:06:30.407271 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2d34deeb-c147-51f6-865b-40ba131b62ad', 'data_vg': 'ceph-2d34deeb-c147-51f6-865b-40ba131b62ad'})
2025-11-01 13:06:30.407275 | orchestrator |
2025-11-01 13:06:30.407280 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-11-01 13:06:30.407285 | orchestrator | Saturday 01 November 2025 13:02:52 +0000 (0:00:43.998) 0:08:48.730 *****
2025-11-01 13:06:30.407290 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407295 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407300 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407304 | orchestrator |
2025-11-01 13:06:30.407309 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-11-01 13:06:30.407314 | orchestrator | Saturday 01 November 2025 13:02:53 +0000 (0:00:00.353) 0:08:49.084 *****
2025-11-01 13:06:30.407319 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.407329 | orchestrator |
2025-11-01 13:06:30.407334 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-11-01 13:06:30.407339 | orchestrator | Saturday 01 November 2025 13:02:53 +0000 (0:00:00.579) 0:08:49.663 *****
2025-11-01 13:06:30.407343 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.407348 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.407353 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.407358 | orchestrator |
2025-11-01 13:06:30.407362 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-11-01 13:06:30.407367 | orchestrator | Saturday 01 November 2025 13:02:54 +0000 (0:00:01.026) 0:08:50.690 *****
2025-11-01 13:06:30.407372 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.407377 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.407382 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.407386 | orchestrator |
2025-11-01 13:06:30.407391 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-11-01 13:06:30.407396 | orchestrator | Saturday 01 November 2025 13:02:57 +0000 (0:00:02.643) 0:08:53.334 *****
2025-11-01 13:06:30.407401 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.407405 | orchestrator |
2025-11-01 13:06:30.407410 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-11-01 13:06:30.407415 | orchestrator | Saturday 01 November 2025 13:02:57 +0000 (0:00:00.560) 0:08:53.894 *****
2025-11-01 13:06:30.407420 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:06:30.407425 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:06:30.407429 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:06:30.407434 | orchestrator |
2025-11-01 13:06:30.407439 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-11-01 13:06:30.407444 | orchestrator | Saturday 01 November 2025 13:02:59 +0000 (0:00:01.578) 0:08:55.473 *****
2025-11-01 13:06:30.407449 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:06:30.407453 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:06:30.407458 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:06:30.407463 | orchestrator |
2025-11-01 13:06:30.407468 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-11-01 13:06:30.407472 | orchestrator | Saturday 01 November 2025 13:03:00 +0000 (0:00:01.220) 0:08:56.693 *****
2025-11-01 13:06:30.407477 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:06:30.407485 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:06:30.407490 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:06:30.407494 | orchestrator |
2025-11-01 13:06:30.407499 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-11-01 13:06:30.407504 | orchestrator | Saturday 01 November 2025 13:03:02 +0000 (0:00:01.765) 0:08:58.459 *****
2025-11-01 13:06:30.407509 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407513 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407518 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407523 | orchestrator |
2025-11-01 13:06:30.407528 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-11-01 13:06:30.407533 | orchestrator | Saturday 01 November 2025 13:03:02 +0000 (0:00:00.358) 0:08:58.818 *****
2025-11-01 13:06:30.407537 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407542 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407547 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407551 | orchestrator |
2025-11-01 13:06:30.407556 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-11-01 13:06:30.407561 | orchestrator | Saturday 01 November 2025 13:03:03 +0000 (0:00:00.728) 0:08:59.546 *****
2025-11-01 13:06:30.407566 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-11-01 13:06:30.407571 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-11-01 13:06:30.407575 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-11-01 13:06:30.407580 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-11-01 13:06:30.407585 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-11-01 13:06:30.407593 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-11-01 13:06:30.407598 | orchestrator |
2025-11-01 13:06:30.407603 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-11-01 13:06:30.407607 | orchestrator | Saturday 01 November 2025 13:03:04 +0000 (0:00:01.102) 0:09:00.649 *****
2025-11-01 13:06:30.407612 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-11-01 13:06:30.407617 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-11-01 13:06:30.407622 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-11-01 13:06:30.407626 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-11-01 13:06:30.407631 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-11-01 13:06:30.407636 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-11-01 13:06:30.407641 | orchestrator |
2025-11-01 13:06:30.407648 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-11-01 13:06:30.407653 | orchestrator | Saturday 01 November 2025 13:03:07 +0000 (0:00:02.313) 0:09:02.962 *****
2025-11-01 13:06:30.407658 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-11-01 13:06:30.407663 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-11-01 13:06:30.407667 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-11-01 13:06:30.407672 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-11-01 13:06:30.407677 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-11-01 13:06:30.407682 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-11-01 13:06:30.407687 | orchestrator |
2025-11-01 13:06:30.407692 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-11-01 13:06:30.407696 | orchestrator | Saturday 01 November 2025 13:03:10 +0000 (0:00:03.623) 0:09:06.586 *****
2025-11-01 13:06:30.407701 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407706 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-11-01 13:06:30.407716 | orchestrator |
2025-11-01 13:06:30.407720 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-11-01 13:06:30.407725 | orchestrator | Saturday 01 November 2025 13:03:14 +0000 (0:00:03.761) 0:09:10.348 *****
2025-11-01 13:06:30.407730 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407735 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407740 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-11-01 13:06:30.407745 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-11-01 13:06:30.407749 | orchestrator |
2025-11-01 13:06:30.407754 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-11-01 13:06:30.407759 | orchestrator | Saturday 01 November 2025 13:03:27 +0000 (0:00:12.817) 0:09:23.166 *****
2025-11-01 13:06:30.407764 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407769 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407773 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407778 | orchestrator |
2025-11-01 13:06:30.407783 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-11-01 13:06:30.407787 | orchestrator | Saturday 01 November 2025 13:03:28 +0000 (0:00:01.224) 0:09:24.390 *****
2025-11-01 13:06:30.407792 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407797 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407802 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407806 | orchestrator |
2025-11-01 13:06:30.407811 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-11-01 13:06:30.407816 | orchestrator | Saturday 01 November 2025 13:03:28 +0000 (0:00:00.404) 0:09:24.794 *****
2025-11-01 13:06:30.407821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:06:30.407826 | orchestrator |
2025-11-01 13:06:30.407830 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-11-01 13:06:30.407835 | orchestrator | Saturday 01 November 2025 13:03:29 +0000 (0:00:00.546) 0:09:25.341 *****
2025-11-01 13:06:30.407844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-01 13:06:30.407849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-01 13:06:30.407853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-01 13:06:30.407858 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407863 | orchestrator |
2025-11-01 13:06:30.407868 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-11-01 13:06:30.407872 | orchestrator | Saturday 01 November 2025 13:03:30 +0000 (0:00:01.128) 0:09:26.469 *****
2025-11-01 13:06:30.407880 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407885 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407890 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407895 | orchestrator |
2025-11-01 13:06:30.407900 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-11-01 13:06:30.407904 | orchestrator | Saturday 01 November 2025 13:03:30 +0000 (0:00:00.356) 0:09:26.825 *****
2025-11-01 13:06:30.407909 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407914 | orchestrator |
2025-11-01 13:06:30.407919 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-11-01 13:06:30.407923 | orchestrator | Saturday 01 November 2025 13:03:31 +0000 (0:00:00.254) 0:09:27.080 *****
2025-11-01 13:06:30.407928 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407933 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.407938 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.407942 | orchestrator |
2025-11-01 13:06:30.407947 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-11-01 13:06:30.407952 | orchestrator | Saturday 01 November 2025 13:03:31 +0000 (0:00:00.385) 0:09:27.465 *****
2025-11-01 13:06:30.407957 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407961 | orchestrator |
2025-11-01 13:06:30.407966 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-11-01 13:06:30.407971 | orchestrator | Saturday 01 November 2025 13:03:31 +0000 (0:00:00.232) 0:09:27.698 *****
2025-11-01 13:06:30.407975 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407980 | orchestrator |
2025-11-01 13:06:30.407985 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-11-01 13:06:30.407990 | orchestrator | Saturday 01 November 2025 13:03:32 +0000 (0:00:00.242) 0:09:27.940 *****
2025-11-01 13:06:30.407995 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.407999 | orchestrator |
2025-11-01 13:06:30.408004 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-11-01 13:06:30.408009 | orchestrator | Saturday 01 November 2025 13:03:32 +0000 (0:00:00.143) 0:09:28.084 *****
2025-11-01 13:06:30.408014 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408019 | orchestrator |
2025-11-01 13:06:30.408023 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-11-01 13:06:30.408028 | orchestrator | Saturday 01 November 2025 13:03:32 +0000 (0:00:00.242) 0:09:28.327 *****
2025-11-01 13:06:30.408035 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408040 | orchestrator |
2025-11-01 13:06:30.408045 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-11-01 13:06:30.408050 | orchestrator | Saturday 01 November 2025 13:03:33 +0000 (0:00:00.887) 0:09:29.214 *****
2025-11-01 13:06:30.408054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-11-01 13:06:30.408059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-11-01 13:06:30.408064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-11-01 13:06:30.408069 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408073 | orchestrator |
2025-11-01 13:06:30.408078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-11-01 13:06:30.408083 | orchestrator | Saturday 01 November 2025 13:03:33 +0000 (0:00:00.452) 0:09:29.667 *****
2025-11-01 13:06:30.408088 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408095 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.408099 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.408104 | orchestrator |
2025-11-01 13:06:30.408109 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-11-01 13:06:30.408114 | orchestrator | Saturday 01 November 2025 13:03:34 +0000 (0:00:00.387) 0:09:30.055 *****
2025-11-01 13:06:30.408119 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408123 | orchestrator |
2025-11-01 13:06:30.408128 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-11-01 13:06:30.408133 | orchestrator | Saturday 01 November 2025 13:03:34 +0000 (0:00:00.293) 0:09:30.348 *****
2025-11-01 13:06:30.408138 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408143 | orchestrator |
2025-11-01 13:06:30.408147 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-11-01 13:06:30.408152 | orchestrator |
2025-11-01 13:06:30.408157 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-11-01 13:06:30.408162 | orchestrator | Saturday 01 November 2025 13:03:35 +0000 (0:00:01.046) 0:09:31.394 *****
2025-11-01 13:06:30.408166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:06:30.408172 | orchestrator |
2025-11-01 13:06:30.408177 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-11-01 13:06:30.408181 | orchestrator | Saturday 01 November 2025 13:03:36 +0000 (0:00:01.137) 0:09:32.532 *****
2025-11-01 13:06:30.408186 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:06:30.408191 | orchestrator |
2025-11-01 13:06:30.408196 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-11-01 13:06:30.408212 | orchestrator | Saturday 01 November 2025 13:03:38 +0000 (0:00:01.460) 0:09:33.992 *****
2025-11-01 13:06:30.408217 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:06:30.408222 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:06:30.408226 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:06:30.408231 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:06:30.408236 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:06:30.408241 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:06:30.408245 | orchestrator |
2025-11-01 13:06:30.408250 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-11-01 13:06:30.408255 | orchestrator | Saturday 01 November 2025 13:03:39 +0000 (0:00:01.549) 0:09:35.542 *****
2025-11-01 13:06:30.408260 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:06:30.408265 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:06:30.408269 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:06:30.408274 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:06:30.408282 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:06:30.408286 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:06:30.408291 | orchestrator |
2025-11-01 13:06:30.408296 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-11-01 13:06:30.408301 | orchestrator | Saturday 01 
November 2025 13:03:40 +0000 (0:00:00.737) 0:09:36.279 ***** 2025-11-01 13:06:30.408306 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408310 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408315 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408320 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408324 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408329 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408334 | orchestrator | 2025-11-01 13:06:30.408339 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 13:06:30.408343 | orchestrator | Saturday 01 November 2025 13:03:41 +0000 (0:00:01.180) 0:09:37.460 ***** 2025-11-01 13:06:30.408348 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408356 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408361 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408366 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408370 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408375 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408380 | orchestrator | 2025-11-01 13:06:30.408385 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 13:06:30.408389 | orchestrator | Saturday 01 November 2025 13:03:42 +0000 (0:00:00.840) 0:09:38.300 ***** 2025-11-01 13:06:30.408394 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408399 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408404 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408408 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408413 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.408418 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408423 | orchestrator | 2025-11-01 13:06:30.408427 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-11-01 13:06:30.408432 | orchestrator | Saturday 01 November 2025 13:03:43 +0000 (0:00:01.404) 0:09:39.705 ***** 2025-11-01 13:06:30.408437 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408442 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408446 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408451 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408456 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408463 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408468 | orchestrator | 2025-11-01 13:06:30.408472 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 13:06:30.408477 | orchestrator | Saturday 01 November 2025 13:03:44 +0000 (0:00:00.658) 0:09:40.363 ***** 2025-11-01 13:06:30.408482 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408487 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408491 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408496 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408501 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408506 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408510 | orchestrator | 2025-11-01 13:06:30.408515 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 13:06:30.408520 | orchestrator | Saturday 01 November 2025 13:03:45 +0000 (0:00:01.002) 0:09:41.366 ***** 2025-11-01 13:06:30.408525 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408529 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408534 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408539 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408543 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.408548 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408553 | orchestrator 
| 2025-11-01 13:06:30.408558 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 13:06:30.408562 | orchestrator | Saturday 01 November 2025 13:03:46 +0000 (0:00:01.098) 0:09:42.465 ***** 2025-11-01 13:06:30.408567 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408572 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408577 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408581 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.408586 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408591 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408595 | orchestrator | 2025-11-01 13:06:30.408600 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 13:06:30.408605 | orchestrator | Saturday 01 November 2025 13:03:47 +0000 (0:00:01.441) 0:09:43.907 ***** 2025-11-01 13:06:30.408610 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408614 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408619 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408624 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408629 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408633 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408642 | orchestrator | 2025-11-01 13:06:30.408647 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 13:06:30.408652 | orchestrator | Saturday 01 November 2025 13:03:48 +0000 (0:00:00.673) 0:09:44.580 ***** 2025-11-01 13:06:30.408656 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408661 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408666 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408671 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408675 | orchestrator | ok: [testbed-node-1] 2025-11-01 
13:06:30.408680 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408685 | orchestrator | 2025-11-01 13:06:30.408690 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 13:06:30.408694 | orchestrator | Saturday 01 November 2025 13:03:49 +0000 (0:00:00.985) 0:09:45.566 ***** 2025-11-01 13:06:30.408699 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408704 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408708 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408713 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408718 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408723 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408727 | orchestrator | 2025-11-01 13:06:30.408732 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 13:06:30.408737 | orchestrator | Saturday 01 November 2025 13:03:50 +0000 (0:00:00.673) 0:09:46.240 ***** 2025-11-01 13:06:30.408742 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408746 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408751 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408759 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408764 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408768 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408773 | orchestrator | 2025-11-01 13:06:30.408778 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 13:06:30.408783 | orchestrator | Saturday 01 November 2025 13:03:51 +0000 (0:00:00.999) 0:09:47.239 ***** 2025-11-01 13:06:30.408788 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408792 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408797 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408802 | orchestrator | skipping: [testbed-node-0] 
2025-11-01 13:06:30.408806 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408811 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408816 | orchestrator | 2025-11-01 13:06:30.408821 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 13:06:30.408825 | orchestrator | Saturday 01 November 2025 13:03:51 +0000 (0:00:00.653) 0:09:47.893 ***** 2025-11-01 13:06:30.408830 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408835 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408840 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408844 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408849 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408854 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408859 | orchestrator | 2025-11-01 13:06:30.408863 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 13:06:30.408868 | orchestrator | Saturday 01 November 2025 13:03:52 +0000 (0:00:00.984) 0:09:48.877 ***** 2025-11-01 13:06:30.408873 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408878 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.408882 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408887 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:06:30.408892 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:06:30.408897 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:06:30.408901 | orchestrator | 2025-11-01 13:06:30.408906 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 13:06:30.408911 | orchestrator | Saturday 01 November 2025 13:03:53 +0000 (0:00:00.678) 0:09:49.555 ***** 2025-11-01 13:06:30.408919 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.408924 | orchestrator | skipping: [testbed-node-4] 
2025-11-01 13:06:30.408929 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.408935 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408940 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.408945 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408950 | orchestrator | 2025-11-01 13:06:30.408955 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 13:06:30.408959 | orchestrator | Saturday 01 November 2025 13:03:54 +0000 (0:00:01.037) 0:09:50.593 ***** 2025-11-01 13:06:30.408964 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.408969 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.408974 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.408978 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.408983 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.408988 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.408993 | orchestrator | 2025-11-01 13:06:30.408997 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 13:06:30.409002 | orchestrator | Saturday 01 November 2025 13:03:55 +0000 (0:00:00.854) 0:09:51.447 ***** 2025-11-01 13:06:30.409007 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409012 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409016 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409021 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.409026 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.409030 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.409035 | orchestrator | 2025-11-01 13:06:30.409040 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-11-01 13:06:30.409045 | orchestrator | Saturday 01 November 2025 13:03:57 +0000 (0:00:01.578) 0:09:53.026 ***** 2025-11-01 13:06:30.409049 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.409054 | orchestrator | 2025-11-01 13:06:30.409059 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-11-01 13:06:30.409064 | orchestrator | Saturday 01 November 2025 13:04:01 +0000 (0:00:04.292) 0:09:57.318 ***** 2025-11-01 13:06:30.409069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.409073 | orchestrator | 2025-11-01 13:06:30.409078 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-11-01 13:06:30.409083 | orchestrator | Saturday 01 November 2025 13:04:03 +0000 (0:00:02.470) 0:09:59.788 ***** 2025-11-01 13:06:30.409088 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.409093 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.409097 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.409102 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.409107 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.409112 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.409116 | orchestrator | 2025-11-01 13:06:30.409121 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-11-01 13:06:30.409126 | orchestrator | Saturday 01 November 2025 13:04:05 +0000 (0:00:01.662) 0:10:01.451 ***** 2025-11-01 13:06:30.409131 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.409135 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.409140 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.409145 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.409150 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.409154 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.409159 | orchestrator | 2025-11-01 13:06:30.409164 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-11-01 13:06:30.409169 | orchestrator | Saturday 01 November 2025 13:04:06 +0000 (0:00:00.892) 0:10:02.343 ***** 2025-11-01 13:06:30.409174 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.409178 | orchestrator | 2025-11-01 13:06:30.409188 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-11-01 13:06:30.409193 | orchestrator | Saturday 01 November 2025 13:04:07 +0000 (0:00:01.505) 0:10:03.849 ***** 2025-11-01 13:06:30.409198 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.409214 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.409219 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.409224 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.409229 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.409234 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.409238 | orchestrator | 2025-11-01 13:06:30.409243 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-11-01 13:06:30.409248 | orchestrator | Saturday 01 November 2025 13:04:10 +0000 (0:00:02.299) 0:10:06.149 ***** 2025-11-01 13:06:30.409253 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.409258 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.409262 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.409267 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.409272 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.409276 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.409281 | orchestrator | 2025-11-01 13:06:30.409286 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-11-01 13:06:30.409291 | orchestrator | Saturday 01 November 2025 13:04:13 +0000 (0:00:03.487) 
0:10:09.636 ***** 2025-11-01 13:06:30.409296 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:06:30.409301 | orchestrator | 2025-11-01 13:06:30.409306 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-11-01 13:06:30.409310 | orchestrator | Saturday 01 November 2025 13:04:15 +0000 (0:00:01.514) 0:10:11.150 ***** 2025-11-01 13:06:30.409315 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409320 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409325 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409329 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.409334 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.409339 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.409344 | orchestrator | 2025-11-01 13:06:30.409348 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-11-01 13:06:30.409353 | orchestrator | Saturday 01 November 2025 13:04:16 +0000 (0:00:00.934) 0:10:12.085 ***** 2025-11-01 13:06:30.409358 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.409363 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.409367 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.409374 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:06:30.409379 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:06:30.409384 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:06:30.409389 | orchestrator | 2025-11-01 13:06:30.409393 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-11-01 13:06:30.409398 | orchestrator | Saturday 01 November 2025 13:04:18 +0000 (0:00:02.190) 0:10:14.275 ***** 2025-11-01 13:06:30.409403 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409408 | orchestrator | 
ok: [testbed-node-4] 2025-11-01 13:06:30.409412 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409417 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:06:30.409422 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:06:30.409427 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:06:30.409431 | orchestrator | 2025-11-01 13:06:30.409436 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-11-01 13:06:30.409441 | orchestrator | 2025-11-01 13:06:30.409446 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 13:06:30.409451 | orchestrator | Saturday 01 November 2025 13:04:19 +0000 (0:00:01.184) 0:10:15.460 ***** 2025-11-01 13:06:30.409455 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.409464 | orchestrator | 2025-11-01 13:06:30.409469 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 13:06:30.409474 | orchestrator | Saturday 01 November 2025 13:04:20 +0000 (0:00:00.625) 0:10:16.085 ***** 2025-11-01 13:06:30.409478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.409483 | orchestrator | 2025-11-01 13:06:30.409488 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 13:06:30.409493 | orchestrator | Saturday 01 November 2025 13:04:21 +0000 (0:00:00.882) 0:10:16.968 ***** 2025-11-01 13:06:30.409498 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409502 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409507 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409512 | orchestrator | 2025-11-01 13:06:30.409517 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-11-01 13:06:30.409521 | orchestrator | Saturday 01 November 2025 13:04:21 +0000 (0:00:00.398) 0:10:17.366 ***** 2025-11-01 13:06:30.409526 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409531 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409536 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409541 | orchestrator | 2025-11-01 13:06:30.409545 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 13:06:30.409550 | orchestrator | Saturday 01 November 2025 13:04:22 +0000 (0:00:00.975) 0:10:18.342 ***** 2025-11-01 13:06:30.409555 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409560 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409564 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409569 | orchestrator | 2025-11-01 13:06:30.409574 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 13:06:30.409579 | orchestrator | Saturday 01 November 2025 13:04:23 +0000 (0:00:01.049) 0:10:19.391 ***** 2025-11-01 13:06:30.409583 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409588 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409593 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409598 | orchestrator | 2025-11-01 13:06:30.409602 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 13:06:30.409607 | orchestrator | Saturday 01 November 2025 13:04:24 +0000 (0:00:00.735) 0:10:20.127 ***** 2025-11-01 13:06:30.409612 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409616 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409621 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409626 | orchestrator | 2025-11-01 13:06:30.409631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 
13:06:30.409638 | orchestrator | Saturday 01 November 2025 13:04:24 +0000 (0:00:00.339) 0:10:20.466 ***** 2025-11-01 13:06:30.409643 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409648 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409653 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409657 | orchestrator | 2025-11-01 13:06:30.409662 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 13:06:30.409667 | orchestrator | Saturday 01 November 2025 13:04:24 +0000 (0:00:00.321) 0:10:20.788 ***** 2025-11-01 13:06:30.409672 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409677 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409681 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409686 | orchestrator | 2025-11-01 13:06:30.409691 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 13:06:30.409696 | orchestrator | Saturday 01 November 2025 13:04:25 +0000 (0:00:00.650) 0:10:21.439 ***** 2025-11-01 13:06:30.409700 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409705 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409710 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409715 | orchestrator | 2025-11-01 13:06:30.409719 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 13:06:30.409729 | orchestrator | Saturday 01 November 2025 13:04:26 +0000 (0:00:00.769) 0:10:22.208 ***** 2025-11-01 13:06:30.409734 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409739 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409744 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409748 | orchestrator | 2025-11-01 13:06:30.409753 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 13:06:30.409758 | orchestrator | 
Saturday 01 November 2025 13:04:27 +0000 (0:00:00.767) 0:10:22.976 ***** 2025-11-01 13:06:30.409763 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409767 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409772 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409777 | orchestrator | 2025-11-01 13:06:30.409782 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 13:06:30.409786 | orchestrator | Saturday 01 November 2025 13:04:27 +0000 (0:00:00.331) 0:10:23.308 ***** 2025-11-01 13:06:30.409791 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409796 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409801 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409806 | orchestrator | 2025-11-01 13:06:30.409812 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 13:06:30.409817 | orchestrator | Saturday 01 November 2025 13:04:28 +0000 (0:00:00.698) 0:10:24.006 ***** 2025-11-01 13:06:30.409822 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409827 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409831 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409836 | orchestrator | 2025-11-01 13:06:30.409841 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 13:06:30.409846 | orchestrator | Saturday 01 November 2025 13:04:28 +0000 (0:00:00.374) 0:10:24.381 ***** 2025-11-01 13:06:30.409850 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409855 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409860 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409864 | orchestrator | 2025-11-01 13:06:30.409869 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 13:06:30.409874 | orchestrator | Saturday 01 November 2025 13:04:28 
+0000 (0:00:00.355) 0:10:24.736 ***** 2025-11-01 13:06:30.409879 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.409884 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.409888 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.409893 | orchestrator | 2025-11-01 13:06:30.409898 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 13:06:30.409902 | orchestrator | Saturday 01 November 2025 13:04:29 +0000 (0:00:00.347) 0:10:25.084 ***** 2025-11-01 13:06:30.409907 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409912 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409917 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409921 | orchestrator | 2025-11-01 13:06:30.409926 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 13:06:30.409931 | orchestrator | Saturday 01 November 2025 13:04:29 +0000 (0:00:00.632) 0:10:25.717 ***** 2025-11-01 13:06:30.409936 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409941 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409945 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409950 | orchestrator | 2025-11-01 13:06:30.409955 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 13:06:30.409960 | orchestrator | Saturday 01 November 2025 13:04:30 +0000 (0:00:00.336) 0:10:26.053 ***** 2025-11-01 13:06:30.409964 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.409969 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.409974 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.409979 | orchestrator | 2025-11-01 13:06:30.409983 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 13:06:30.409988 | orchestrator | Saturday 01 November 2025 13:04:30 +0000 (0:00:00.382) 
0:10:26.436 ***** 2025-11-01 13:06:30.409996 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410001 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410006 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410011 | orchestrator | 2025-11-01 13:06:30.410031 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 13:06:30.410036 | orchestrator | Saturday 01 November 2025 13:04:30 +0000 (0:00:00.350) 0:10:26.787 ***** 2025-11-01 13:06:30.410041 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410046 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410051 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410055 | orchestrator | 2025-11-01 13:06:30.410060 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-11-01 13:06:30.410065 | orchestrator | Saturday 01 November 2025 13:04:31 +0000 (0:00:00.912) 0:10:27.699 ***** 2025-11-01 13:06:30.410070 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.410075 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.410079 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-11-01 13:06:30.410084 | orchestrator | 2025-11-01 13:06:30.410089 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-11-01 13:06:30.410097 | orchestrator | Saturday 01 November 2025 13:04:32 +0000 (0:00:00.436) 0:10:28.136 ***** 2025-11-01 13:06:30.410102 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.410106 | orchestrator | 2025-11-01 13:06:30.410111 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-11-01 13:06:30.410116 | orchestrator | Saturday 01 November 2025 13:04:34 +0000 (0:00:02.409) 0:10:30.545 ***** 2025-11-01 13:06:30.410121 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-11-01 13:06:30.410127 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410132 | orchestrator | 2025-11-01 13:06:30.410137 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-11-01 13:06:30.410141 | orchestrator | Saturday 01 November 2025 13:04:34 +0000 (0:00:00.216) 0:10:30.762 ***** 2025-11-01 13:06:30.410147 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:06:30.410156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:06:30.410161 | orchestrator | 2025-11-01 13:06:30.410166 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-11-01 13:06:30.410171 | orchestrator | Saturday 01 November 2025 13:04:44 +0000 (0:00:09.251) 0:10:40.013 ***** 2025-11-01 13:06:30.410176 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:06:30.410180 | orchestrator | 2025-11-01 13:06:30.410188 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-11-01 13:06:30.410193 | orchestrator | Saturday 01 November 2025 13:04:48 +0000 (0:00:04.173) 0:10:44.187 ***** 2025-11-01 13:06:30.410198 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-11-01 13:06:30.410226 | orchestrator | 2025-11-01 13:06:30.410231 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-11-01 13:06:30.410236 | orchestrator | Saturday 01 November 2025 13:04:48 +0000 (0:00:00.664) 0:10:44.851 ***** 2025-11-01 13:06:30.410241 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 13:06:30.410245 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 13:06:30.410254 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-11-01 13:06:30.410259 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-11-01 13:06:30.410264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-11-01 13:06:30.410268 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-11-01 13:06:30.410273 | orchestrator | 2025-11-01 13:06:30.410278 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-11-01 13:06:30.410283 | orchestrator | Saturday 01 November 2025 13:04:50 +0000 (0:00:01.082) 0:10:45.934 ***** 2025-11-01 13:06:30.410287 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.410292 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 13:06:30.410297 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:06:30.410302 | orchestrator | 2025-11-01 13:06:30.410306 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-11-01 13:06:30.410311 | orchestrator | Saturday 01 November 2025 13:04:52 +0000 (0:00:02.677) 0:10:48.611 ***** 2025-11-01 13:06:30.410316 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 13:06:30.410321 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-11-01 13:06:30.410326 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410330 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 13:06:30.410335 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-01 13:06:30.410340 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410345 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 13:06:30.410349 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-01 13:06:30.410354 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410359 | orchestrator | 2025-11-01 13:06:30.410364 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-11-01 13:06:30.410368 | orchestrator | Saturday 01 November 2025 13:04:54 +0000 (0:00:01.887) 0:10:50.499 ***** 2025-11-01 13:06:30.410373 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410378 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410383 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410387 | orchestrator | 2025-11-01 13:06:30.410392 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-11-01 13:06:30.410397 | orchestrator | Saturday 01 November 2025 13:04:57 +0000 (0:00:02.608) 0:10:53.107 ***** 2025-11-01 13:06:30.410402 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410406 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.410411 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.410416 | orchestrator | 2025-11-01 13:06:30.410421 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-11-01 13:06:30.410426 | orchestrator | Saturday 01 November 2025 13:04:57 +0000 (0:00:00.384) 0:10:53.491 ***** 2025-11-01 13:06:30.410433 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-11-01 13:06:30.410438 | orchestrator | 2025-11-01 13:06:30.410443 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-11-01 13:06:30.410448 | orchestrator | Saturday 01 November 2025 13:04:58 +0000 (0:00:00.974) 0:10:54.466 ***** 2025-11-01 13:06:30.410453 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.410457 | orchestrator | 2025-11-01 13:06:30.410462 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-11-01 13:06:30.410467 | orchestrator | Saturday 01 November 2025 13:04:59 +0000 (0:00:00.759) 0:10:55.226 ***** 2025-11-01 13:06:30.410472 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410477 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410481 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410490 | orchestrator | 2025-11-01 13:06:30.410495 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-11-01 13:06:30.410500 | orchestrator | Saturday 01 November 2025 13:05:00 +0000 (0:00:01.343) 0:10:56.569 ***** 2025-11-01 13:06:30.410505 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410509 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410514 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410519 | orchestrator | 2025-11-01 13:06:30.410524 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-11-01 13:06:30.410528 | orchestrator | Saturday 01 November 2025 13:05:02 +0000 (0:00:01.544) 0:10:58.114 ***** 2025-11-01 13:06:30.410533 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410537 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410542 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410546 | orchestrator | 2025-11-01 
13:06:30.410551 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-11-01 13:06:30.410555 | orchestrator | Saturday 01 November 2025 13:05:04 +0000 (0:00:02.135) 0:11:00.249 ***** 2025-11-01 13:06:30.410560 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410565 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410569 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410574 | orchestrator | 2025-11-01 13:06:30.410581 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-11-01 13:06:30.410585 | orchestrator | Saturday 01 November 2025 13:05:06 +0000 (0:00:02.279) 0:11:02.528 ***** 2025-11-01 13:06:30.410590 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410594 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410599 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410603 | orchestrator | 2025-11-01 13:06:30.410608 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 13:06:30.410612 | orchestrator | Saturday 01 November 2025 13:05:08 +0000 (0:00:01.902) 0:11:04.431 ***** 2025-11-01 13:06:30.410617 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410621 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410626 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410630 | orchestrator | 2025-11-01 13:06:30.410635 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-11-01 13:06:30.410639 | orchestrator | Saturday 01 November 2025 13:05:09 +0000 (0:00:00.911) 0:11:05.343 ***** 2025-11-01 13:06:30.410644 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.410649 | orchestrator | 2025-11-01 13:06:30.410653 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-11-01 13:06:30.410658 | orchestrator | Saturday 01 November 2025 13:05:10 +0000 (0:00:00.929) 0:11:06.272 ***** 2025-11-01 13:06:30.410662 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410667 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410671 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410676 | orchestrator | 2025-11-01 13:06:30.410680 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-11-01 13:06:30.410685 | orchestrator | Saturday 01 November 2025 13:05:10 +0000 (0:00:00.387) 0:11:06.660 ***** 2025-11-01 13:06:30.410689 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.410694 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.410698 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.410703 | orchestrator | 2025-11-01 13:06:30.410707 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-11-01 13:06:30.410712 | orchestrator | Saturday 01 November 2025 13:05:12 +0000 (0:00:01.463) 0:11:08.123 ***** 2025-11-01 13:06:30.410716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.410721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.410725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.410730 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410738 | orchestrator | 2025-11-01 13:06:30.410742 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-11-01 13:06:30.410747 | orchestrator | Saturday 01 November 2025 13:05:13 +0000 (0:00:01.189) 0:11:09.313 ***** 2025-11-01 13:06:30.410751 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410756 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410760 | orchestrator | ok: [testbed-node-5] 2025-11-01 
13:06:30.410765 | orchestrator | 2025-11-01 13:06:30.410770 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-01 13:06:30.410774 | orchestrator | 2025-11-01 13:06:30.410779 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-11-01 13:06:30.410783 | orchestrator | Saturday 01 November 2025 13:05:14 +0000 (0:00:00.810) 0:11:10.124 ***** 2025-11-01 13:06:30.410788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.410792 | orchestrator | 2025-11-01 13:06:30.410797 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-11-01 13:06:30.410801 | orchestrator | Saturday 01 November 2025 13:05:14 +0000 (0:00:00.501) 0:11:10.625 ***** 2025-11-01 13:06:30.410809 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.410813 | orchestrator | 2025-11-01 13:06:30.410818 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-11-01 13:06:30.410822 | orchestrator | Saturday 01 November 2025 13:05:15 +0000 (0:00:00.723) 0:11:11.348 ***** 2025-11-01 13:06:30.410827 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410831 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.410836 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.410840 | orchestrator | 2025-11-01 13:06:30.410845 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-11-01 13:06:30.410849 | orchestrator | Saturday 01 November 2025 13:05:15 +0000 (0:00:00.324) 0:11:11.673 ***** 2025-11-01 13:06:30.410854 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410858 | orchestrator | ok: [testbed-node-5] 2025-11-01 
13:06:30.410863 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410867 | orchestrator | 2025-11-01 13:06:30.410872 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-11-01 13:06:30.410876 | orchestrator | Saturday 01 November 2025 13:05:16 +0000 (0:00:00.693) 0:11:12.366 ***** 2025-11-01 13:06:30.410881 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410885 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410890 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410894 | orchestrator | 2025-11-01 13:06:30.410899 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-11-01 13:06:30.410903 | orchestrator | Saturday 01 November 2025 13:05:17 +0000 (0:00:00.979) 0:11:13.346 ***** 2025-11-01 13:06:30.410908 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.410912 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.410917 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.410921 | orchestrator | 2025-11-01 13:06:30.410926 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-11-01 13:06:30.410930 | orchestrator | Saturday 01 November 2025 13:05:18 +0000 (0:00:00.764) 0:11:14.110 ***** 2025-11-01 13:06:30.410935 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410939 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.410944 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.410948 | orchestrator | 2025-11-01 13:06:30.410953 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-11-01 13:06:30.410960 | orchestrator | Saturday 01 November 2025 13:05:18 +0000 (0:00:00.394) 0:11:14.505 ***** 2025-11-01 13:06:30.410964 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.410969 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.410974 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 13:06:30.410982 | orchestrator | 2025-11-01 13:06:30.410986 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-11-01 13:06:30.410991 | orchestrator | Saturday 01 November 2025 13:05:18 +0000 (0:00:00.401) 0:11:14.906 ***** 2025-11-01 13:06:30.410995 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411000 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411004 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411009 | orchestrator | 2025-11-01 13:06:30.411013 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-11-01 13:06:30.411018 | orchestrator | Saturday 01 November 2025 13:05:19 +0000 (0:00:00.531) 0:11:15.438 ***** 2025-11-01 13:06:30.411023 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411027 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411032 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411036 | orchestrator | 2025-11-01 13:06:30.411040 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-11-01 13:06:30.411045 | orchestrator | Saturday 01 November 2025 13:05:20 +0000 (0:00:00.744) 0:11:16.182 ***** 2025-11-01 13:06:30.411050 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411054 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411059 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411063 | orchestrator | 2025-11-01 13:06:30.411068 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-11-01 13:06:30.411072 | orchestrator | Saturday 01 November 2025 13:05:20 +0000 (0:00:00.702) 0:11:16.885 ***** 2025-11-01 13:06:30.411077 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411081 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411085 | orchestrator | skipping: [testbed-node-5] 2025-11-01 
13:06:30.411090 | orchestrator | 2025-11-01 13:06:30.411094 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-11-01 13:06:30.411099 | orchestrator | Saturday 01 November 2025 13:05:21 +0000 (0:00:00.315) 0:11:17.201 ***** 2025-11-01 13:06:30.411104 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411108 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411113 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411117 | orchestrator | 2025-11-01 13:06:30.411121 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-11-01 13:06:30.411126 | orchestrator | Saturday 01 November 2025 13:05:21 +0000 (0:00:00.356) 0:11:17.557 ***** 2025-11-01 13:06:30.411131 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411135 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411139 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411144 | orchestrator | 2025-11-01 13:06:30.411148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-11-01 13:06:30.411153 | orchestrator | Saturday 01 November 2025 13:05:22 +0000 (0:00:00.678) 0:11:18.235 ***** 2025-11-01 13:06:30.411158 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411162 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411167 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411171 | orchestrator | 2025-11-01 13:06:30.411175 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-11-01 13:06:30.411180 | orchestrator | Saturday 01 November 2025 13:05:22 +0000 (0:00:00.393) 0:11:18.629 ***** 2025-11-01 13:06:30.411184 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411189 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411193 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411198 | orchestrator | 2025-11-01 
13:06:30.411213 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-11-01 13:06:30.411218 | orchestrator | Saturday 01 November 2025 13:05:23 +0000 (0:00:00.379) 0:11:19.009 ***** 2025-11-01 13:06:30.411222 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411227 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411231 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411236 | orchestrator | 2025-11-01 13:06:30.411243 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-11-01 13:06:30.411251 | orchestrator | Saturday 01 November 2025 13:05:23 +0000 (0:00:00.320) 0:11:19.329 ***** 2025-11-01 13:06:30.411256 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411260 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411265 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411269 | orchestrator | 2025-11-01 13:06:30.411274 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-11-01 13:06:30.411278 | orchestrator | Saturday 01 November 2025 13:05:24 +0000 (0:00:00.665) 0:11:19.995 ***** 2025-11-01 13:06:30.411283 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411287 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411292 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411296 | orchestrator | 2025-11-01 13:06:30.411301 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-11-01 13:06:30.411305 | orchestrator | Saturday 01 November 2025 13:05:24 +0000 (0:00:00.334) 0:11:20.330 ***** 2025-11-01 13:06:30.411310 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411315 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411319 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411323 | orchestrator | 2025-11-01 13:06:30.411328 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-11-01 13:06:30.411333 | orchestrator | Saturday 01 November 2025 13:05:24 +0000 (0:00:00.364) 0:11:20.694 ***** 2025-11-01 13:06:30.411337 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.411342 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.411346 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.411350 | orchestrator | 2025-11-01 13:06:30.411355 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-11-01 13:06:30.411360 | orchestrator | Saturday 01 November 2025 13:05:25 +0000 (0:00:00.874) 0:11:21.568 ***** 2025-11-01 13:06:30.411364 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.411369 | orchestrator | 2025-11-01 13:06:30.411373 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-01 13:06:30.411378 | orchestrator | Saturday 01 November 2025 13:05:26 +0000 (0:00:00.620) 0:11:22.189 ***** 2025-11-01 13:06:30.411384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411389 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 13:06:30.411394 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:06:30.411398 | orchestrator | 2025-11-01 13:06:30.411403 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-01 13:06:30.411407 | orchestrator | Saturday 01 November 2025 13:05:28 +0000 (0:00:02.325) 0:11:24.515 ***** 2025-11-01 13:06:30.411412 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 13:06:30.411417 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-11-01 13:06:30.411421 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.411426 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-11-01 13:06:30.411430 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-11-01 13:06:30.411435 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.411439 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 13:06:30.411444 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-11-01 13:06:30.411448 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.411453 | orchestrator | 2025-11-01 13:06:30.411457 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-11-01 13:06:30.411462 | orchestrator | Saturday 01 November 2025 13:05:30 +0000 (0:00:01.486) 0:11:26.001 ***** 2025-11-01 13:06:30.411466 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411471 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411475 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411480 | orchestrator | 2025-11-01 13:06:30.411484 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-11-01 13:06:30.411492 | orchestrator | Saturday 01 November 2025 13:05:30 +0000 (0:00:00.389) 0:11:26.391 ***** 2025-11-01 13:06:30.411497 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.411501 | orchestrator | 2025-11-01 13:06:30.411506 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-11-01 13:06:30.411510 | orchestrator | Saturday 01 November 2025 13:05:31 +0000 (0:00:00.584) 0:11:26.975 ***** 2025-11-01 13:06:30.411515 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.411520 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.411524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.411529 | orchestrator | 2025-11-01 13:06:30.411534 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-11-01 13:06:30.411538 | orchestrator | Saturday 01 November 2025 13:05:32 +0000 (0:00:01.438) 0:11:28.414 ***** 2025-11-01 13:06:30.411543 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411547 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 13:06:30.411552 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411559 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 13:06:30.411564 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411568 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-11-01 13:06:30.411573 | orchestrator | 2025-11-01 13:06:30.411577 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-11-01 13:06:30.411582 | orchestrator | Saturday 01 November 2025 13:05:37 +0000 (0:00:04.705) 0:11:33.119 ***** 2025-11-01 13:06:30.411586 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411591 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:06:30.411595 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411600 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:06:30.411604 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:06:30.411609 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:06:30.411613 | orchestrator | 2025-11-01 13:06:30.411618 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-11-01 13:06:30.411622 | orchestrator | Saturday 01 November 2025 13:05:39 +0000 (0:00:02.433) 0:11:35.552 ***** 2025-11-01 13:06:30.411627 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-11-01 13:06:30.411631 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.411636 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-11-01 13:06:30.411640 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.411645 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-11-01 13:06:30.411649 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.411654 | orchestrator | 2025-11-01 13:06:30.411658 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-11-01 13:06:30.411663 | orchestrator | Saturday 01 November 2025 13:05:40 +0000 (0:00:01.199) 0:11:36.752 ***** 2025-11-01 13:06:30.411669 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-11-01 13:06:30.411677 | orchestrator | 2025-11-01 13:06:30.411682 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-11-01 13:06:30.411686 | orchestrator | Saturday 01 November 2025 13:05:41 +0000 (0:00:00.246) 0:11:36.999 ***** 2025-11-01 13:06:30.411691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-11-01 13:06:30.411696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411714 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411719 | orchestrator | 2025-11-01 13:06:30.411723 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-11-01 13:06:30.411728 | orchestrator | Saturday 01 November 2025 13:05:42 +0000 (0:00:01.227) 0:11:38.226 ***** 2025-11-01 13:06:30.411732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-11-01 13:06:30.411755 | orchestrator | skipping: [testbed-node-3] 2025-11-01 
13:06:30.411760 | orchestrator | 2025-11-01 13:06:30.411764 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-11-01 13:06:30.411769 | orchestrator | Saturday 01 November 2025 13:05:43 +0000 (0:00:00.756) 0:11:38.983 ***** 2025-11-01 13:06:30.411773 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 13:06:30.411778 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 13:06:30.411783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 13:06:30.411790 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 13:06:30.411794 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-11-01 13:06:30.411799 | orchestrator | 2025-11-01 13:06:30.411804 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-11-01 13:06:30.411808 | orchestrator | Saturday 01 November 2025 13:06:15 +0000 (0:00:32.466) 0:12:11.450 ***** 2025-11-01 13:06:30.411813 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411817 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411822 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411826 | orchestrator | 2025-11-01 13:06:30.411831 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-11-01 13:06:30.411840 | orchestrator | 
Saturday 01 November 2025 13:06:15 +0000 (0:00:00.378) 0:12:11.828 ***** 2025-11-01 13:06:30.411844 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.411849 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.411853 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.411858 | orchestrator | 2025-11-01 13:06:30.411863 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-11-01 13:06:30.411867 | orchestrator | Saturday 01 November 2025 13:06:16 +0000 (0:00:00.376) 0:12:12.204 ***** 2025-11-01 13:06:30.411872 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.411876 | orchestrator | 2025-11-01 13:06:30.411881 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-11-01 13:06:30.411885 | orchestrator | Saturday 01 November 2025 13:06:17 +0000 (0:00:00.955) 0:12:13.160 ***** 2025-11-01 13:06:30.411890 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.411894 | orchestrator | 2025-11-01 13:06:30.411899 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-11-01 13:06:30.411903 | orchestrator | Saturday 01 November 2025 13:06:17 +0000 (0:00:00.606) 0:12:13.767 ***** 2025-11-01 13:06:30.411910 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.411915 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.411919 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.411924 | orchestrator | 2025-11-01 13:06:30.411928 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-11-01 13:06:30.411933 | orchestrator | Saturday 01 November 2025 13:06:19 +0000 (0:00:01.340) 0:12:15.108 ***** 2025-11-01 13:06:30.411937 | orchestrator | changed: 
[testbed-node-3] 2025-11-01 13:06:30.411942 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.411946 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.411951 | orchestrator | 2025-11-01 13:06:30.411955 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-11-01 13:06:30.411960 | orchestrator | Saturday 01 November 2025 13:06:20 +0000 (0:00:01.619) 0:12:16.727 ***** 2025-11-01 13:06:30.411965 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:06:30.411969 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:06:30.411974 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:06:30.411978 | orchestrator | 2025-11-01 13:06:30.411983 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-11-01 13:06:30.411987 | orchestrator | Saturday 01 November 2025 13:06:22 +0000 (0:00:01.918) 0:12:18.646 ***** 2025-11-01 13:06:30.411992 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.411996 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.412001 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-11-01 13:06:30.412005 | orchestrator | 2025-11-01 13:06:30.412010 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-11-01 13:06:30.412014 | orchestrator | Saturday 01 November 2025 13:06:25 +0000 (0:00:02.842) 0:12:21.488 ***** 2025-11-01 13:06:30.412019 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.412024 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.412028 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.412032 | orchestrator 
| 2025-11-01 13:06:30.412037 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-11-01 13:06:30.412042 | orchestrator | Saturday 01 November 2025 13:06:25 +0000 (0:00:00.383) 0:12:21.872 ***** 2025-11-01 13:06:30.412046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:06:30.412054 | orchestrator | 2025-11-01 13:06:30.412058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-11-01 13:06:30.412063 | orchestrator | Saturday 01 November 2025 13:06:26 +0000 (0:00:00.587) 0:12:22.460 ***** 2025-11-01 13:06:30.412067 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.412072 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.412076 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.412081 | orchestrator | 2025-11-01 13:06:30.412086 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-11-01 13:06:30.412090 | orchestrator | Saturday 01 November 2025 13:06:27 +0000 (0:00:00.657) 0:12:23.117 ***** 2025-11-01 13:06:30.412094 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:06:30.412099 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:06:30.412104 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:06:30.412108 | orchestrator | 2025-11-01 13:06:30.412113 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-11-01 13:06:30.412117 | orchestrator | Saturday 01 November 2025 13:06:27 +0000 (0:00:00.380) 0:12:23.498 ***** 2025-11-01 13:06:30.412124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:06:30.412129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:06:30.412134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:06:30.412138 | orchestrator 
| skipping: [testbed-node-3] 2025-11-01 13:06:30.412143 | orchestrator | 2025-11-01 13:06:30.412147 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-11-01 13:06:30.412152 | orchestrator | Saturday 01 November 2025 13:06:28 +0000 (0:00:00.708) 0:12:24.206 ***** 2025-11-01 13:06:30.412156 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:06:30.412161 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:06:30.412165 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:06:30.412170 | orchestrator | 2025-11-01 13:06:30.412174 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:06:30.412179 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-11-01 13:06:30.412184 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-11-01 13:06:30.412188 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-11-01 13:06:30.412193 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-11-01 13:06:30.412197 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-11-01 13:06:30.412212 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-11-01 13:06:30.412217 | orchestrator | 2025-11-01 13:06:30.412221 | orchestrator | 2025-11-01 13:06:30.412226 | orchestrator | 2025-11-01 13:06:30.412233 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:06:30.412237 | orchestrator | Saturday 01 November 2025 13:06:28 +0000 (0:00:00.273) 0:12:24.480 ***** 2025-11-01 13:06:30.412242 | orchestrator | =============================================================================== 
2025-11-01 13:06:30.412246 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.23s 2025-11-01 13:06:30.412251 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.00s 2025-11-01 13:06:30.412255 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.47s 2025-11-01 13:06:30.412260 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.54s 2025-11-01 13:06:30.412268 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.99s 2025-11-01 13:06:30.412272 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.44s 2025-11-01 13:06:30.412277 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.82s 2025-11-01 13:06:30.412281 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.37s 2025-11-01 13:06:30.412286 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.43s 2025-11-01 13:06:30.412290 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.25s 2025-11-01 13:06:30.412295 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.00s 2025-11-01 13:06:30.412299 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.75s 2025-11-01 13:06:30.412304 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 5.75s 2025-11-01 13:06:30.412309 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.56s 2025-11-01 13:06:30.412313 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.71s 2025-11-01 13:06:30.412317 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.50s 2025-11-01 
13:06:30.412322 | orchestrator | ceph-facts : Set_fact _container_exec_cmd ------------------------------- 4.38s 2025-11-01 13:06:30.412327 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.29s 2025-11-01 13:06:30.412331 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 4.25s 2025-11-01 13:06:30.412336 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.24s 2025-11-01 13:06:30.412340 | orchestrator | 2025-11-01 13:06:30 | INFO  | Task b4eaea7f-852e-4920-a182-bbd169b3d5b2 is in state SUCCESS 2025-11-01 13:06:30.412345 | orchestrator | 2025-11-01 13:06:30 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:06:30.412349 | orchestrator | 2025-11-01 13:06:30 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:06:30.412354 | orchestrator | 2025-11-01 13:06:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:06:33.436092 | orchestrator | 2025-11-01 13:06:33 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:06:33.437246 | orchestrator | 2025-11-01 13:06:33 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:06:33.438804 | orchestrator | 2025-11-01 13:06:33 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:06:33.438826 | orchestrator | 2025-11-01 13:06:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:06:36.486712 | orchestrator | 2025-11-01 13:06:36 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:06:36.487958 | orchestrator | 2025-11-01 13:06:36 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:06:36.490155 | orchestrator | 2025-11-01 13:06:36 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state STARTED 2025-11-01 13:06:36.490184 | orchestrator | 2025-11-01 13:06:36 | 
Wait 1 second(s) until the next check 2025-11-01 13:07:49.644178 | orchestrator | 2025-11-01 13:07:49 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:07:49.646163 | orchestrator | 2025-11-01 13:07:49 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:07:49.647683 | orchestrator | 2025-11-01 13:07:49 | INFO  | Task 59503531-b050-4c01-8687-28fa0155666f is in state SUCCESS 2025-11-01 13:07:49.649525 | orchestrator | 2025-11-01 13:07:49.649548 | orchestrator | 2025-11-01 13:07:49.649560 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:07:49.649573 | orchestrator | 2025-11-01 13:07:49.649584 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:07:49.649596 | orchestrator | Saturday 01 November 2025 13:04:39 +0000 (0:00:00.342) 0:00:00.342 ***** 2025-11-01 13:07:49.649607 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:07:49.649619 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:07:49.649631 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:07:49.649641 | orchestrator | 2025-11-01 13:07:49.649653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:07:49.649664 | orchestrator | Saturday 01 November 2025 13:04:39 +0000 (0:00:00.383) 0:00:00.725 ***** 2025-11-01 13:07:49.649676 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-11-01 13:07:49.649688 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-11-01 13:07:49.649699 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-11-01 13:07:49.649710 | orchestrator | 2025-11-01 13:07:49.649721 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-11-01 13:07:49.649732 | orchestrator | 2025-11-01 13:07:49.649761 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-11-01 13:07:49.649773 | orchestrator | Saturday 01 November 2025 13:04:40 +0000 (0:00:00.461) 0:00:01.187 ***** 2025-11-01 13:07:49.649784 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:07:49.649795 | orchestrator | 2025-11-01 13:07:49.649806 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-11-01 13:07:49.649817 | orchestrator | Saturday 01 November 2025 13:04:40 +0000 (0:00:00.566) 0:00:01.753 ***** 2025-11-01 13:07:49.649828 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:07:49.649839 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:07:49.649850 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-11-01 13:07:49.649861 | orchestrator | 2025-11-01 13:07:49.649872 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-11-01 13:07:49.649882 | orchestrator | Saturday 01 November 2025 13:04:41 +0000 (0:00:00.806) 0:00:02.560 ***** 2025-11-01 13:07:49.649897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.649933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.649955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.649976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.649991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650083 | orchestrator | 2025-11-01 13:07:49.650097 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-11-01 13:07:49.650109 | orchestrator | Saturday 01 November 2025 13:04:43 +0000 (0:00:01.888) 0:00:04.448 ***** 2025-11-01 13:07:49.650120 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:07:49.650131 | orchestrator | 2025-11-01 13:07:49.650144 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-11-01 13:07:49.650157 | orchestrator | Saturday 01 November 2025 13:04:44 +0000 (0:00:00.571) 0:00:05.020 ***** 2025-11-01 13:07:49.650181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650321 | orchestrator | 2025-11-01 13:07:49.650333 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 
2025-11-01 13:07:49.650346 | orchestrator | Saturday 01 November 2025 13:04:46 +0000 (0:00:02.886) 0:00:07.906 ***** 2025-11-01 13:07:49.650360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650397 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:07:49.650410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650452 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:07:49.650465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650499 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:07:49.650510 | orchestrator | 2025-11-01 13:07:49.650521 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-11-01 13:07:49.650532 | orchestrator | Saturday 01 November 2025 13:04:48 +0000 (0:00:01.418) 0:00:09.324 ***** 2025-11-01 13:07:49.650543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650611 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:07:49.650622 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:07:49.650634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-11-01 13:07:49.650653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-11-01 13:07:49.650665 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:07:49.650676 | orchestrator | 2025-11-01 13:07:49.650687 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-11-01 13:07:49.650702 | orchestrator | Saturday 01 November 2025 13:04:49 +0000 (0:00:01.328) 0:00:10.653 ***** 2025-11-01 13:07:49.650714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.650805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-11-01 13:07:49.650816 | orchestrator |
2025-11-01 13:07:49.650827 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-11-01 13:07:49.650838 | orchestrator | Saturday 01 November 2025 13:04:52 +0000 (0:00:02.471) 0:00:13.125 *****
2025-11-01 13:07:49.650849 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.650860 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:07:49.650871 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:07:49.650882 | orchestrator |
2025-11-01 13:07:49.650893 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-11-01 13:07:49.650904 | orchestrator | Saturday 01 November 2025 13:04:55 +0000 (0:00:03.460) 0:00:16.585 *****
2025-11-01 13:07:49.650915 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.650925 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:07:49.650936 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:07:49.650947 | orchestrator |
2025-11-01 13:07:49.650957 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-11-01 13:07:49.650968 |
orchestrator | Saturday 01 November 2025 13:04:57 +0000 (0:00:02.330) 0:00:18.916 ***** 2025-11-01 13:07:49.650980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.650998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.651028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-11-01 13:07:49.651040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.651053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.651072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-11-01 13:07:49.651092 | orchestrator | 
2025-11-01 13:07:49.651103 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-11-01 13:07:49.651114 | orchestrator | Saturday 01 November 2025 13:05:00 +0000 (0:00:02.739) 0:00:21.655 *****
2025-11-01 13:07:49.651130 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:07:49.651142 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:07:49.651153 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:07:49.651164 | orchestrator |
2025-11-01 13:07:49.651174 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-11-01 13:07:49.651185 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.340) 0:00:21.996 *****
2025-11-01 13:07:49.651196 | orchestrator |
2025-11-01 13:07:49.651224 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-11-01 13:07:49.651235 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.079) 0:00:22.075 *****
2025-11-01 13:07:49.651246 | orchestrator |
2025-11-01 13:07:49.651256 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-11-01 13:07:49.651267 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.080) 0:00:22.156 *****
2025-11-01 13:07:49.651278 | orchestrator |
2025-11-01 13:07:49.651289 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-11-01 13:07:49.651300 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.091) 0:00:22.248 *****
2025-11-01 13:07:49.651311 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:07:49.651322 | orchestrator |
2025-11-01 13:07:49.651333 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-11-01 13:07:49.651344 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.271) 0:00:22.519 *****
2025-11-01 13:07:49.651355 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:07:49.651366 | orchestrator |
2025-11-01 13:07:49.651377 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-11-01 13:07:49.651388 | orchestrator | Saturday 01 November 2025 13:05:02 +0000 (0:00:01.111) 0:00:23.631 *****
2025-11-01 13:07:49.651399 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.651410 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:07:49.651421 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:07:49.651432 | orchestrator |
2025-11-01 13:07:49.651443 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-11-01 13:07:49.651454 | orchestrator | Saturday 01 November 2025 13:06:07 +0000 (0:01:04.926) 0:01:28.558 *****
2025-11-01 13:07:49.651465 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.651476 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:07:49.651487 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:07:49.651498 | orchestrator |
2025-11-01 13:07:49.651508 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-11-01 13:07:49.651519 | orchestrator | Saturday 01 November 2025 13:07:35 +0000 (0:01:28.139) 0:02:56.697 *****
2025-11-01 13:07:49.651530 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:07:49.651541 | orchestrator |
2025-11-01 13:07:49.651552 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-11-01 13:07:49.651563 | orchestrator | Saturday 01 November 2025 13:07:36 +0000 (0:00:00.763) 0:02:57.460 *****
2025-11-01 13:07:49.651574 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:07:49.651585 | orchestrator |
2025-11-01 13:07:49.651596 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-11-01 13:07:49.651607 | orchestrator | Saturday 01 November 2025 13:07:39 +0000 (0:00:02.768) 0:03:00.228 *****
2025-11-01 13:07:49.651618 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:07:49.651629 | orchestrator |
2025-11-01 13:07:49.651640 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-11-01 13:07:49.651657 | orchestrator | Saturday 01 November 2025 13:07:41 +0000 (0:00:03.168) 0:03:02.801 *****
2025-11-01 13:07:49.651668 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.651679 | orchestrator |
2025-11-01 13:07:49.651690 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-11-01 13:07:49.651701 | orchestrator | Saturday 01 November 2025 13:07:45 +0000 (0:00:03.168) 0:03:05.969 *****
2025-11-01 13:07:49.651712 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:07:49.651723 | orchestrator |
2025-11-01 13:07:49.651734 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:07:49.651745 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 13:07:49.651758 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 13:07:49.651769 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 13:07:49.651780 | orchestrator |
2025-11-01 13:07:49.651791 | orchestrator |
2025-11-01 13:07:49.651802 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:07:49.651818 | orchestrator | Saturday 01 November 2025 13:07:47 +0000 (0:00:02.756) 0:03:08.726 *****
2025-11-01 13:07:49.651830 | orchestrator | ===============================================================================
2025-11-01 13:07:49.651840 | orchestrator | opensearch : Restart opensearch-dashboards
container ------------------- 88.14s 2025-11-01 13:07:49.651851 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.93s 2025-11-01 13:07:49.651862 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.46s 2025-11-01 13:07:49.651873 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.17s 2025-11-01 13:07:49.651884 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.89s 2025-11-01 13:07:49.651894 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.77s 2025-11-01 13:07:49.651905 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.76s 2025-11-01 13:07:49.651916 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.74s 2025-11-01 13:07:49.651931 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.57s 2025-11-01 13:07:49.651942 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.47s 2025-11-01 13:07:49.651953 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.33s 2025-11-01 13:07:49.651963 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.89s 2025-11-01 13:07:49.651974 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.42s 2025-11-01 13:07:49.651985 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.33s 2025-11-01 13:07:49.651996 | orchestrator | opensearch : Perform a flush -------------------------------------------- 1.11s 2025-11-01 13:07:49.652006 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.81s 2025-11-01 13:07:49.652017 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.76s 2025-11-01 13:07:49.652028 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-11-01 13:07:49.652039 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-11-01 13:07:49.652049 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-11-01 13:07:49.652060 | orchestrator | 2025-11-01 13:07:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:07:52.696861 | orchestrator | 2025-11-01 13:07:52 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:07:52.698791 | orchestrator | 2025-11-01 13:07:52 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:07:52.698822 | orchestrator | 2025-11-01 13:07:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:07:55.737410 | orchestrator | 2025-11-01 13:07:55 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:07:55.738061 | orchestrator | 2025-11-01 13:07:55 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:07:55.738472 | orchestrator | 2025-11-01 13:07:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:07:58.776036 | orchestrator | 2025-11-01 13:07:58 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:07:58.777305 | orchestrator | 2025-11-01 13:07:58 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:07:58.777341 | orchestrator | 2025-11-01 13:07:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:01.823904 | orchestrator | 2025-11-01 13:08:01 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:08:01.827184 | orchestrator | 2025-11-01 13:08:01 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:01.827261 | 
orchestrator | 2025-11-01 13:08:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:04.864938 | orchestrator | 2025-11-01 13:08:04 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state STARTED 2025-11-01 13:08:04.865756 | orchestrator | 2025-11-01 13:08:04 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:04.865789 | orchestrator | 2025-11-01 13:08:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:07.912161 | orchestrator | 2025-11-01 13:08:07 | INFO  | Task fa2b1815-0da2-45ca-affb-f2f8d8610e2e is in state SUCCESS 2025-11-01 13:08:07.912293 | orchestrator | 2025-11-01 13:08:07.914627 | orchestrator | 2025-11-01 13:08:07.914687 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-11-01 13:08:07.914710 | orchestrator | 2025-11-01 13:08:07.914726 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-11-01 13:08:07.914744 | orchestrator | Saturday 01 November 2025 13:04:39 +0000 (0:00:00.101) 0:00:00.101 ***** 2025-11-01 13:08:07.914761 | orchestrator | ok: [localhost] => { 2025-11-01 13:08:07.914779 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-11-01 13:08:07.914790 | orchestrator | } 2025-11-01 13:08:07.914801 | orchestrator | 2025-11-01 13:08:07.914811 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-11-01 13:08:07.914821 | orchestrator | Saturday 01 November 2025 13:04:39 +0000 (0:00:00.052) 0:00:00.154 ***** 2025-11-01 13:08:07.914831 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-11-01 13:08:07.914842 | orchestrator | ...ignoring 2025-11-01 13:08:07.914852 | orchestrator | 2025-11-01 13:08:07.914862 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-11-01 13:08:07.914872 | orchestrator | Saturday 01 November 2025 13:04:42 +0000 (0:00:02.983) 0:00:03.137 ***** 2025-11-01 13:08:07.914882 | orchestrator | skipping: [localhost] 2025-11-01 13:08:07.914891 | orchestrator | 2025-11-01 13:08:07.914901 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-11-01 13:08:07.914911 | orchestrator | Saturday 01 November 2025 13:04:42 +0000 (0:00:00.067) 0:00:03.204 ***** 2025-11-01 13:08:07.914920 | orchestrator | ok: [localhost] 2025-11-01 13:08:07.914930 | orchestrator | 2025-11-01 13:08:07.914940 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:08:07.914949 | orchestrator | 2025-11-01 13:08:07.914976 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:08:07.915007 | orchestrator | Saturday 01 November 2025 13:04:42 +0000 (0:00:00.165) 0:00:03.370 ***** 2025-11-01 13:08:07.915018 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.915028 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.915037 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.915047 | orchestrator | 2025-11-01 13:08:07.915057 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:08:07.915066 | orchestrator | Saturday 01 November 2025 13:04:42 +0000 (0:00:00.316) 0:00:03.686 ***** 2025-11-01 13:08:07.915076 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-11-01 13:08:07.915086 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-11-01 13:08:07.915096 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-11-01 13:08:07.915105 | orchestrator | 2025-11-01 13:08:07.915115 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-11-01 13:08:07.915124 | orchestrator | 2025-11-01 13:08:07.915134 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-11-01 13:08:07.915144 | orchestrator | Saturday 01 November 2025 13:04:43 +0000 (0:00:00.682) 0:00:04.368 ***** 2025-11-01 13:08:07.915153 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-11-01 13:08:07.915163 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-11-01 13:08:07.915172 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-11-01 13:08:07.915182 | orchestrator | 2025-11-01 13:08:07.915193 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 13:08:07.915229 | orchestrator | Saturday 01 November 2025 13:04:43 +0000 (0:00:00.453) 0:00:04.821 ***** 2025-11-01 13:08:07.915241 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:08:07.915252 | orchestrator | 2025-11-01 13:08:07.915263 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-11-01 13:08:07.915274 | orchestrator | Saturday 01 November 2025 13:04:44 +0000 (0:00:00.654) 0:00:05.476 ***** 2025-11-01 13:08:07.915307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.915331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.915354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.915368 | orchestrator | 2025-11-01 13:08:07.915385 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-11-01 13:08:07.915397 | orchestrator | Saturday 01 November 2025 13:04:47 +0000 (0:00:03.213) 0:00:08.689 ***** 2025-11-01 13:08:07.915410 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.915423 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.915434 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.915445 | orchestrator | 2025-11-01 13:08:07.915462 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-11-01 13:08:07.915474 | orchestrator | Saturday 01 November 2025 13:04:48 +0000 (0:00:00.760) 0:00:09.450 ***** 2025-11-01 13:08:07.915485 | orchestrator | skipping: [testbed-node-1] 2025-11-01 
13:08:07.915496 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.915507 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.915519 | orchestrator | 2025-11-01 13:08:07.915530 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-11-01 13:08:07.915542 | orchestrator | Saturday 01 November 2025 13:04:50 +0000 (0:00:01.854) 0:00:11.305 ***** 2025-11-01 13:08:07.915558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.915576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.915712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 
13:08:07.915728 | orchestrator | 2025-11-01 13:08:07.915738 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-11-01 13:08:07.915748 | orchestrator | Saturday 01 November 2025 13:04:54 +0000 (0:00:04.264) 0:00:15.569 ***** 2025-11-01 13:08:07.915758 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.915768 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.915778 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.915787 | orchestrator | 2025-11-01 13:08:07.915797 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-11-01 13:08:07.915807 | orchestrator | Saturday 01 November 2025 13:04:55 +0000 (0:00:01.208) 0:00:16.778 ***** 2025-11-01 13:08:07.915816 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:08:07.915826 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:08:07.915836 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.915845 | orchestrator | 2025-11-01 13:08:07.915855 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 13:08:07.915865 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:05.401) 0:00:22.180 ***** 2025-11-01 13:08:07.915875 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:08:07.915884 | orchestrator | 2025-11-01 13:08:07.915894 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-11-01 13:08:07.915904 | orchestrator | Saturday 01 November 2025 13:05:01 +0000 (0:00:00.602) 0:00:22.782 ***** 2025-11-01 13:08:07.915923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.915942 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.915958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.915969 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.915987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916004 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916014 | orchestrator | 2025-11-01 13:08:07.916024 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-11-01 13:08:07.916050 | orchestrator | Saturday 01 November 2025 13:05:06 +0000 (0:00:04.176) 0:00:26.959 ***** 2025-11-01 13:08:07.916065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916076 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916109 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.916124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916135 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.916145 | orchestrator | 2025-11-01 13:08:07.916154 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-11-01 13:08:07.916164 | orchestrator | Saturday 01 November 2025 13:05:10 +0000 (0:00:04.602) 0:00:31.561 ***** 2025-11-01 13:08:07.916174 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916190 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.916230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916242 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-11-01 13:08:07.916269 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.916279 | orchestrator | 2025-11-01 13:08:07.916289 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-11-01 13:08:07.916299 | orchestrator | Saturday 01 November 2025 13:05:14 +0000 
(0:00:03.880) 0:00:35.442 ***** 2025-11-01 13:08:07.916324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.916345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.916386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-11-01 13:08:07.916404 | orchestrator | 2025-11-01 13:08:07.916421 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-11-01 13:08:07.916439 | orchestrator | Saturday 01 November 2025 13:05:18 +0000 (0:00:04.296) 0:00:39.739 ***** 2025-11-01 13:08:07.916457 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.916481 | orchestrator | 
changed: [testbed-node-1] 2025-11-01 13:08:07.916494 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:08:07.916505 | orchestrator | 2025-11-01 13:08:07.916516 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-11-01 13:08:07.916528 | orchestrator | Saturday 01 November 2025 13:05:19 +0000 (0:00:00.854) 0:00:40.593 ***** 2025-11-01 13:08:07.916539 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.916551 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.916562 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.916573 | orchestrator | 2025-11-01 13:08:07.916584 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-11-01 13:08:07.916594 | orchestrator | Saturday 01 November 2025 13:05:20 +0000 (0:00:00.480) 0:00:41.074 ***** 2025-11-01 13:08:07.916606 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.916617 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.916628 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.916639 | orchestrator | 2025-11-01 13:08:07.916651 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-11-01 13:08:07.916662 | orchestrator | Saturday 01 November 2025 13:05:20 +0000 (0:00:00.358) 0:00:41.432 ***** 2025-11-01 13:08:07.916681 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-11-01 13:08:07.916692 | orchestrator | ...ignoring 2025-11-01 13:08:07.916701 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-11-01 13:08:07.916711 | orchestrator | ...ignoring 2025-11-01 13:08:07.916721 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-11-01 13:08:07.916731 | orchestrator | ...ignoring 2025-11-01 13:08:07.916740 | orchestrator | 2025-11-01 13:08:07.916750 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-11-01 13:08:07.916759 | orchestrator | Saturday 01 November 2025 13:05:31 +0000 (0:00:10.771) 0:00:52.204 ***** 2025-11-01 13:08:07.916769 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.916778 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.916788 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.916797 | orchestrator | 2025-11-01 13:08:07.916807 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-11-01 13:08:07.916816 | orchestrator | Saturday 01 November 2025 13:05:31 +0000 (0:00:00.502) 0:00:52.707 ***** 2025-11-01 13:08:07.916826 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.916835 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.916845 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916854 | orchestrator | 2025-11-01 13:08:07.916864 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-11-01 13:08:07.916873 | orchestrator | Saturday 01 November 2025 13:05:32 +0000 (0:00:00.751) 0:00:53.458 ***** 2025-11-01 13:08:07.916883 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.916892 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.916902 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916911 | orchestrator | 2025-11-01 13:08:07.916921 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-11-01 13:08:07.916930 | orchestrator | Saturday 01 November 2025 13:05:33 +0000 (0:00:00.568) 0:00:54.027 ***** 2025-11-01 13:08:07.916940 | orchestrator | skipping: 
[testbed-node-0] 2025-11-01 13:08:07.916949 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.916959 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.916968 | orchestrator | 2025-11-01 13:08:07.916978 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-11-01 13:08:07.916988 | orchestrator | Saturday 01 November 2025 13:05:33 +0000 (0:00:00.507) 0:00:54.534 ***** 2025-11-01 13:08:07.916997 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.917007 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.917016 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.917026 | orchestrator | 2025-11-01 13:08:07.917035 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-11-01 13:08:07.917045 | orchestrator | Saturday 01 November 2025 13:05:34 +0000 (0:00:00.454) 0:00:54.989 ***** 2025-11-01 13:08:07.917060 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.917070 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.917079 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.917089 | orchestrator | 2025-11-01 13:08:07.917099 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 13:08:07.917108 | orchestrator | Saturday 01 November 2025 13:05:34 +0000 (0:00:00.729) 0:00:55.719 ***** 2025-11-01 13:08:07.917118 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.917127 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.917137 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-11-01 13:08:07.917147 | orchestrator | 2025-11-01 13:08:07.917156 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-11-01 13:08:07.917172 | orchestrator | Saturday 01 November 2025 13:05:35 +0000 (0:00:00.426) 0:00:56.145 ***** 2025-11-01 
13:08:07.917182 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.917192 | orchestrator | 2025-11-01 13:08:07.917201 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-11-01 13:08:07.917247 | orchestrator | Saturday 01 November 2025 13:05:46 +0000 (0:00:10.982) 0:01:07.127 ***** 2025-11-01 13:08:07.917257 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.917267 | orchestrator | 2025-11-01 13:08:07.917277 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-11-01 13:08:07.917286 | orchestrator | Saturday 01 November 2025 13:05:46 +0000 (0:00:00.160) 0:01:07.288 ***** 2025-11-01 13:08:07.917296 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.917305 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.917318 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.917335 | orchestrator | 2025-11-01 13:08:07.917352 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-11-01 13:08:07.917369 | orchestrator | Saturday 01 November 2025 13:05:47 +0000 (0:00:01.120) 0:01:08.409 ***** 2025-11-01 13:08:07.917397 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.917415 | orchestrator | 2025-11-01 13:08:07.917433 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-11-01 13:08:07.917457 | orchestrator | Saturday 01 November 2025 13:05:56 +0000 (0:00:08.939) 0:01:17.349 ***** 2025-11-01 13:08:07.917477 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.917493 | orchestrator | 2025-11-01 13:08:07.917511 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-11-01 13:08:07.917528 | orchestrator | Saturday 01 November 2025 13:05:58 +0000 (0:00:01.661) 0:01:19.011 ***** 2025-11-01 13:08:07.917544 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.917561 | 
orchestrator | 2025-11-01 13:08:07.917572 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-11-01 13:08:07.917581 | orchestrator | Saturday 01 November 2025 13:06:00 +0000 (0:00:02.854) 0:01:21.865 ***** 2025-11-01 13:08:07.917591 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.917601 | orchestrator | 2025-11-01 13:08:07.917610 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-11-01 13:08:07.917620 | orchestrator | Saturday 01 November 2025 13:06:01 +0000 (0:00:00.122) 0:01:21.988 ***** 2025-11-01 13:08:07.917629 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.917639 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.917649 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.917658 | orchestrator | 2025-11-01 13:08:07.917668 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-11-01 13:08:07.917678 | orchestrator | Saturday 01 November 2025 13:06:01 +0000 (0:00:00.351) 0:01:22.340 ***** 2025-11-01 13:08:07.917687 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.917697 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-11-01 13:08:07.917707 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:08:07.917716 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:08:07.917726 | orchestrator | 2025-11-01 13:08:07.917735 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-11-01 13:08:07.917745 | orchestrator | skipping: no hosts matched 2025-11-01 13:08:07.917754 | orchestrator | 2025-11-01 13:08:07.917764 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-01 13:08:07.917774 | orchestrator | 2025-11-01 13:08:07.917783 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-11-01 13:08:07.917793 | orchestrator | Saturday 01 November 2025 13:06:02 +0000 (0:00:00.618) 0:01:22.958 ***** 2025-11-01 13:08:07.917802 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:08:07.917812 | orchestrator | 2025-11-01 13:08:07.917821 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 13:08:07.917831 | orchestrator | Saturday 01 November 2025 13:06:21 +0000 (0:00:19.353) 0:01:42.311 ***** 2025-11-01 13:08:07.917849 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.917859 | orchestrator | 2025-11-01 13:08:07.917868 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 13:08:07.917878 | orchestrator | Saturday 01 November 2025 13:06:42 +0000 (0:00:20.656) 0:02:02.967 ***** 2025-11-01 13:08:07.917888 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.917897 | orchestrator | 2025-11-01 13:08:07.917907 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-11-01 13:08:07.917916 | orchestrator | 2025-11-01 13:08:07.917926 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-01 13:08:07.917935 | orchestrator | Saturday 01 November 2025 13:06:44 +0000 (0:00:02.779) 0:02:05.747 ***** 2025-11-01 13:08:07.917945 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:08:07.917954 | orchestrator | 2025-11-01 13:08:07.917964 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 13:08:07.917974 | orchestrator | Saturday 01 November 2025 13:07:05 +0000 (0:00:20.413) 0:02:26.160 ***** 2025-11-01 13:08:07.917983 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.917993 | orchestrator | 2025-11-01 13:08:07.918002 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 13:08:07.918012 
| orchestrator | Saturday 01 November 2025 13:07:26 +0000 (0:00:21.577) 0:02:47.737 ***** 2025-11-01 13:08:07.918073 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.918083 | orchestrator | 2025-11-01 13:08:07.918093 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-11-01 13:08:07.918103 | orchestrator | 2025-11-01 13:08:07.918121 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-11-01 13:08:07.918131 | orchestrator | Saturday 01 November 2025 13:07:29 +0000 (0:00:02.821) 0:02:50.559 ***** 2025-11-01 13:08:07.918141 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.918151 | orchestrator | 2025-11-01 13:08:07.918160 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-11-01 13:08:07.918170 | orchestrator | Saturday 01 November 2025 13:07:43 +0000 (0:00:13.766) 0:03:04.325 ***** 2025-11-01 13:08:07.918179 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.918189 | orchestrator | 2025-11-01 13:08:07.918199 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-11-01 13:08:07.918234 | orchestrator | Saturday 01 November 2025 13:07:49 +0000 (0:00:05.605) 0:03:09.930 ***** 2025-11-01 13:08:07.918246 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.918255 | orchestrator | 2025-11-01 13:08:07.918265 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-11-01 13:08:07.918275 | orchestrator | 2025-11-01 13:08:07.918285 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-11-01 13:08:07.918295 | orchestrator | Saturday 01 November 2025 13:07:52 +0000 (0:00:03.250) 0:03:13.180 ***** 2025-11-01 13:08:07.918304 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:08:07.918314 | orchestrator | 
2025-11-01 13:08:07.918324 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-11-01 13:08:07.918333 | orchestrator | Saturday 01 November 2025 13:07:52 +0000 (0:00:00.604) 0:03:13.785 ***** 2025-11-01 13:08:07.918343 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.918353 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.918362 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.918374 | orchestrator | 2025-11-01 13:08:07.918391 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-11-01 13:08:07.918413 | orchestrator | Saturday 01 November 2025 13:07:55 +0000 (0:00:02.667) 0:03:16.453 ***** 2025-11-01 13:08:07.918429 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.918445 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.918461 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.918476 | orchestrator | 2025-11-01 13:08:07.918493 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-11-01 13:08:07.918519 | orchestrator | Saturday 01 November 2025 13:07:58 +0000 (0:00:02.707) 0:03:19.160 ***** 2025-11-01 13:08:07.918536 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.918552 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.918568 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.918584 | orchestrator | 2025-11-01 13:08:07.918601 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-11-01 13:08:07.918616 | orchestrator | Saturday 01 November 2025 13:08:00 +0000 (0:00:02.657) 0:03:21.818 ***** 2025-11-01 13:08:07.918630 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.918644 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.918658 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:08:07.918673 | orchestrator | 
2025-11-01 13:08:07.918689 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-11-01 13:08:07.918705 | orchestrator | Saturday 01 November 2025 13:08:03 +0000 (0:00:02.543) 0:03:24.362 ***** 2025-11-01 13:08:07.918722 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:08:07.918738 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:08:07.918755 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:08:07.918771 | orchestrator | 2025-11-01 13:08:07.918785 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-11-01 13:08:07.918795 | orchestrator | Saturday 01 November 2025 13:08:06 +0000 (0:00:03.404) 0:03:27.766 ***** 2025-11-01 13:08:07.918804 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:08:07.918814 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:08:07.918824 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:08:07.918833 | orchestrator | 2025-11-01 13:08:07.918842 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:08:07.918852 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-11-01 13:08:07.918863 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-11-01 13:08:07.918875 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-01 13:08:07.918884 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-11-01 13:08:07.918894 | orchestrator | 2025-11-01 13:08:07.918904 | orchestrator | 2025-11-01 13:08:07.918913 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:08:07.918923 | orchestrator | Saturday 01 November 2025 13:08:07 +0000 (0:00:00.252) 0:03:28.018 ***** 2025-11-01 13:08:07.918932 | 
orchestrator | =============================================================================== 2025-11-01 13:08:07.918942 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.23s 2025-11-01 13:08:07.918952 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.77s 2025-11-01 13:08:07.918961 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.77s 2025-11-01 13:08:07.918971 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.98s 2025-11-01 13:08:07.918980 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.77s 2025-11-01 13:08:07.918990 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.94s 2025-11-01 13:08:07.919008 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2025-11-01 13:08:07.919018 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.60s 2025-11-01 13:08:07.919028 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.40s 2025-11-01 13:08:07.919037 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.60s 2025-11-01 13:08:07.919047 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.30s 2025-11-01 13:08:07.919065 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.26s 2025-11-01 13:08:07.919075 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.18s 2025-11-01 13:08:07.919084 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.88s 2025-11-01 13:08:07.919094 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.40s 2025-11-01 13:08:07.919103 | 
orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.25s 2025-11-01 13:08:07.919113 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.21s 2025-11-01 13:08:07.919122 | orchestrator | Check MariaDB service --------------------------------------------------- 2.98s 2025-11-01 13:08:07.919132 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.85s 2025-11-01 13:08:07.919141 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.71s 2025-11-01 13:08:07.919151 | orchestrator | 2025-11-01 13:08:07 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:07.919166 | orchestrator | 2025-11-01 13:08:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:10.962843 | orchestrator | 2025-11-01 13:08:10 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:10.968371 | orchestrator | 2025-11-01 13:08:10 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:10.970330 | orchestrator | 2025-11-01 13:08:10 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:10.970357 | orchestrator | 2025-11-01 13:08:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:14.019196 | orchestrator | 2025-11-01 13:08:14 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:14.020024 | orchestrator | 2025-11-01 13:08:14 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:14.021143 | orchestrator | 2025-11-01 13:08:14 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:14.021167 | orchestrator | 2025-11-01 13:08:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:17.051096 | orchestrator | 2025-11-01 13:08:17 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is 
in state STARTED 2025-11-01 13:08:17.052429 | orchestrator | 2025-11-01 13:08:17 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:17.054000 | orchestrator | 2025-11-01 13:08:17 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:17.054136 | orchestrator | 2025-11-01 13:08:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:50.547171 | orchestrator | 2025-11-01 13:08:50 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state STARTED 2025-11-01 13:08:50.547306 | orchestrator | 2025-11-01 13:08:50 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:50.548793 | orchestrator | 2025-11-01 13:08:50 | INFO  | Task 
7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:50.548819 | orchestrator | 2025-11-01 13:08:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:53.592529 | orchestrator | 2025-11-01 13:08:53 | INFO  | Task acb02a68-2c03-40e9-a833-67af72bfab5c is in state SUCCESS 2025-11-01 13:08:53.594446 | orchestrator | 2025-11-01 13:08:53.594487 | orchestrator | 2025-11-01 13:08:53.594500 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-11-01 13:08:53.594580 | orchestrator | 2025-11-01 13:08:53.594594 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-11-01 13:08:53.594606 | orchestrator | Saturday 01 November 2025 13:06:34 +0000 (0:00:00.693) 0:00:00.693 ***** 2025-11-01 13:08:53.594617 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:08:53.594630 | orchestrator | 2025-11-01 13:08:53.594641 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-11-01 13:08:53.594652 | orchestrator | Saturday 01 November 2025 13:06:35 +0000 (0:00:00.782) 0:00:01.476 ***** 2025-11-01 13:08:53.594663 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.594675 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.594686 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.594697 | orchestrator | 2025-11-01 13:08:53.594960 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-11-01 13:08:53.595382 | orchestrator | Saturday 01 November 2025 13:06:35 +0000 (0:00:00.668) 0:00:02.144 ***** 2025-11-01 13:08:53.595401 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595413 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595424 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595435 | orchestrator | 2025-11-01 13:08:53.595446 | orchestrator 
| TASK [ceph-facts : Check if podman binary is present] ************************** 2025-11-01 13:08:53.595457 | orchestrator | Saturday 01 November 2025 13:06:36 +0000 (0:00:00.391) 0:00:02.535 ***** 2025-11-01 13:08:53.595468 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595479 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595490 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595501 | orchestrator | 2025-11-01 13:08:53.595512 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-11-01 13:08:53.595523 | orchestrator | Saturday 01 November 2025 13:06:37 +0000 (0:00:00.880) 0:00:03.416 ***** 2025-11-01 13:08:53.595534 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595545 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595556 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595567 | orchestrator | 2025-11-01 13:08:53.595578 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-11-01 13:08:53.595589 | orchestrator | Saturday 01 November 2025 13:06:37 +0000 (0:00:00.330) 0:00:03.747 ***** 2025-11-01 13:08:53.595600 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595610 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595621 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595632 | orchestrator | 2025-11-01 13:08:53.595661 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-11-01 13:08:53.595696 | orchestrator | Saturday 01 November 2025 13:06:37 +0000 (0:00:00.355) 0:00:04.102 ***** 2025-11-01 13:08:53.595708 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595719 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595729 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595740 | orchestrator | 2025-11-01 13:08:53.595751 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not 
previously set] *** 2025-11-01 13:08:53.595763 | orchestrator | Saturday 01 November 2025 13:06:38 +0000 (0:00:00.367) 0:00:04.469 ***** 2025-11-01 13:08:53.595774 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.595785 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.595796 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.595807 | orchestrator | 2025-11-01 13:08:53.595818 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-11-01 13:08:53.595829 | orchestrator | Saturday 01 November 2025 13:06:38 +0000 (0:00:00.553) 0:00:05.022 ***** 2025-11-01 13:08:53.595840 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595851 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595861 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.595872 | orchestrator | 2025-11-01 13:08:53.595883 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-11-01 13:08:53.595894 | orchestrator | Saturday 01 November 2025 13:06:38 +0000 (0:00:00.296) 0:00:05.319 ***** 2025-11-01 13:08:53.595905 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:08:53.595916 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:08:53.595926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:08:53.595937 | orchestrator | 2025-11-01 13:08:53.595948 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-11-01 13:08:53.595959 | orchestrator | Saturday 01 November 2025 13:06:39 +0000 (0:00:00.691) 0:00:06.011 ***** 2025-11-01 13:08:53.595970 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.595980 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.595991 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.596002 | 
orchestrator | 2025-11-01 13:08:53.596013 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-11-01 13:08:53.596024 | orchestrator | Saturday 01 November 2025 13:06:40 +0000 (0:00:00.460) 0:00:06.472 ***** 2025-11-01 13:08:53.596035 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:08:53.596046 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:08:53.596057 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:08:53.596068 | orchestrator | 2025-11-01 13:08:53.596078 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-11-01 13:08:53.596089 | orchestrator | Saturday 01 November 2025 13:06:42 +0000 (0:00:02.272) 0:00:08.744 ***** 2025-11-01 13:08:53.596100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 13:08:53.596111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 13:08:53.596122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-11-01 13:08:53.596133 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596144 | orchestrator | 2025-11-01 13:08:53.596155 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-11-01 13:08:53.596208 | orchestrator | Saturday 01 November 2025 13:06:43 +0000 (0:00:00.739) 0:00:09.483 ***** 2025-11-01 13:08:53.596255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596304 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596315 | orchestrator | 2025-11-01 13:08:53.596325 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-11-01 13:08:53.596336 | orchestrator | Saturday 01 November 2025 13:06:44 +0000 (0:00:00.916) 0:00:10.400 ***** 2025-11-01 13:08:53.596349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596369 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.596392 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596403 | orchestrator | 2025-11-01 13:08:53.596414 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-11-01 13:08:53.596425 | orchestrator | Saturday 01 November 2025 13:06:44 +0000 (0:00:00.382) 0:00:10.782 ***** 2025-11-01 13:08:53.596437 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8021d360b8e1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-11-01 13:06:40.868021', 'end': '2025-11-01 13:06:40.940521', 'delta': '0:00:00.072500', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8021d360b8e1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-11-01 13:08:53.596451 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '85fba22a757f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-11-01 13:06:41.710838', 'end': '2025-11-01 13:06:41.749797', 'delta': '0:00:00.038959', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['85fba22a757f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-11-01 13:08:53.596497 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c592970708c9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-11-01 13:06:42.239034', 'end': '2025-11-01 13:06:42.284466', 'delta': '0:00:00.045432', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c592970708c9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-11-01 13:08:53.596521 | orchestrator | 2025-11-01 13:08:53.596532 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-11-01 13:08:53.596543 | orchestrator | Saturday 01 November 2025 13:06:44 +0000 (0:00:00.250) 0:00:11.032 ***** 2025-11-01 13:08:53.596554 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.596565 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.596576 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.596587 | orchestrator | 2025-11-01 13:08:53.596597 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-11-01 13:08:53.596608 | orchestrator | Saturday 01 November 2025 13:06:45 +0000 (0:00:00.474) 0:00:11.507 ***** 2025-11-01 13:08:53.596619 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-11-01 13:08:53.596630 | orchestrator | 2025-11-01 13:08:53.596641 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-11-01 13:08:53.596651 | orchestrator | Saturday 01 November 2025 13:06:46 +0000 (0:00:01.735) 0:00:13.242 
***** 2025-11-01 13:08:53.596662 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596673 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.596684 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.596695 | orchestrator | 2025-11-01 13:08:53.596705 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-11-01 13:08:53.596716 | orchestrator | Saturday 01 November 2025 13:06:47 +0000 (0:00:00.321) 0:00:13.563 ***** 2025-11-01 13:08:53.596726 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596737 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.596748 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.596759 | orchestrator | 2025-11-01 13:08:53.596775 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-01 13:08:53.596786 | orchestrator | Saturday 01 November 2025 13:06:47 +0000 (0:00:00.492) 0:00:14.056 ***** 2025-11-01 13:08:53.596797 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596807 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.596818 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.596829 | orchestrator | 2025-11-01 13:08:53.596840 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-11-01 13:08:53.596850 | orchestrator | Saturday 01 November 2025 13:06:48 +0000 (0:00:00.563) 0:00:14.619 ***** 2025-11-01 13:08:53.596861 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.596872 | orchestrator | 2025-11-01 13:08:53.596882 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-11-01 13:08:53.596893 | orchestrator | Saturday 01 November 2025 13:06:48 +0000 (0:00:00.131) 0:00:14.751 ***** 2025-11-01 13:08:53.596904 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596915 | orchestrator | 2025-11-01 13:08:53.596925 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-11-01 13:08:53.596936 | orchestrator | Saturday 01 November 2025 13:06:48 +0000 (0:00:00.263) 0:00:15.014 ***** 2025-11-01 13:08:53.596947 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.596958 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.596968 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.596979 | orchestrator | 2025-11-01 13:08:53.596990 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-11-01 13:08:53.597000 | orchestrator | Saturday 01 November 2025 13:06:49 +0000 (0:00:00.335) 0:00:15.350 ***** 2025-11-01 13:08:53.597018 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597029 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597040 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597050 | orchestrator | 2025-11-01 13:08:53.597061 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-11-01 13:08:53.597072 | orchestrator | Saturday 01 November 2025 13:06:49 +0000 (0:00:00.338) 0:00:15.688 ***** 2025-11-01 13:08:53.597083 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597093 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597104 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597115 | orchestrator | 2025-11-01 13:08:53.597126 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-11-01 13:08:53.597136 | orchestrator | Saturday 01 November 2025 13:06:49 +0000 (0:00:00.571) 0:00:16.259 ***** 2025-11-01 13:08:53.597147 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597158 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597168 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597179 | orchestrator | 2025-11-01 13:08:53.597190 | 
orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-11-01 13:08:53.597201 | orchestrator | Saturday 01 November 2025 13:06:50 +0000 (0:00:00.371) 0:00:16.631 ***** 2025-11-01 13:08:53.597211 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597284 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597296 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597307 | orchestrator | 2025-11-01 13:08:53.597318 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-11-01 13:08:53.597329 | orchestrator | Saturday 01 November 2025 13:06:50 +0000 (0:00:00.368) 0:00:17.000 ***** 2025-11-01 13:08:53.597340 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597351 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597362 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597372 | orchestrator | 2025-11-01 13:08:53.597383 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-11-01 13:08:53.597428 | orchestrator | Saturday 01 November 2025 13:06:51 +0000 (0:00:00.340) 0:00:17.341 ***** 2025-11-01 13:08:53.597440 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597450 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.597459 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.597469 | orchestrator | 2025-11-01 13:08:53.597479 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-11-01 13:08:53.597488 | orchestrator | Saturday 01 November 2025 13:06:51 +0000 (0:00:00.562) 0:00:17.904 ***** 2025-11-01 13:08:53.597499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef', 
'dm-uuid-LVM-fDTpWClBjZ9Us4p8lfhtANZK4vLC820taYmvievWGuCmwhc1CUqHSku1d3Bz3JoN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad', 'dm-uuid-LVM-pdVlv3KVvcnVqMGiaDwkB46kRxQvUgkdTSSCTkG3pO8o7wfA4qNO43l3AMC7123p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597555 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1', 'dm-uuid-LVM-ehIfJlPiGc2uigZOsqopqiFLANEOix1Xdj3JmMubFLusgjuIjBl2BirrwsTyULbt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597769 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774', 'dm-uuid-LVM-r8tx29ZMlQVBBI2chiCFB5cyO1pdwz8LLdk2mRfOHl9NPN3EYVCijdNqhnTrpmNE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yXwX0t-f9iC-iCcI-yN1X-tJVr-wlAT-2Kim49', 'scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826', 'scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i4h7F2-YliL-SO9h-Csgw-3CJZ-dMHf-khJwX8', 'scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef', 'scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f', 'scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597848 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597895 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.597905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.597979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.597991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EUQzMZ-PMCb-pPIT-wNZG-jNnZ-MeqS-hGDXl3', 'scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db', 'scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857', 'dm-uuid-LVM-TGwqzibQaZzWxavZ7bwpJb5Bm19fuc8dzN6cYcJaRnPY0G7QShibxCft9nmjmY1y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EP6xe2-EOUm-XIoM-j2QM-F2TH-29vI-sqP21Z', 'scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805', 'scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73', 'dm-uuid-LVM-oMXBBs41xleAnydzHhAkrMLMr1a9xjvhy4VgzGoZt3LJiPBQovI0rmeIzq1Qo46Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03', 'scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-11-01 13:08:53.598154 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.598164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-11-01 13:08:53.598276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ogQA8B-7r4e-vqdN-CHaY-teu3-lKEI-fsce3l', 'scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa', 'scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MqJV1c-Heey-RafL-3X6C-WDDi-I2Rp-LWSwHf', 'scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8', 'scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-11-01 13:08:53.598335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc', 'scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:08:53.598352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-11-01 13:08:53.598365 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:08:53.598376 | orchestrator |
2025-11-01 13:08:53.598387 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-11-01 13:08:53.598405 | orchestrator | Saturday 01 November 2025 13:06:52 +0000 (0:00:00.607) 0:00:18.511 *****
2025-11-01 13:08:53.598417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef', 'dm-uuid-LVM-fDTpWClBjZ9Us4p8lfhtANZK4vLC820taYmvievWGuCmwhc1CUqHSku1d3Bz3JoN'], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad', 'dm-uuid-LVM-pdVlv3KVvcnVqMGiaDwkB46kRxQvUgkdTSSCTkG3pO8o7wfA4qNO43l3AMC7123p'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598459 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1', 'dm-uuid-LVM-ehIfJlPiGc2uigZOsqopqiFLANEOix1Xdj3JmMubFLusgjuIjBl2BirrwsTyULbt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598577 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774', 'dm-uuid-LVM-r8tx29ZMlQVBBI2chiCFB5cyO1pdwz8LLdk2mRfOHl9NPN3EYVCijdNqhnTrpmNE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-11-01 13:08:53.598602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c58f6bdd-66dc-4844-a9dc-254d04287c11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d83d2135--3529--5759--9738--6f5d85bcdaef-osd--block--d83d2135--3529--5759--9738--6f5d85bcdaef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yXwX0t-f9iC-iCcI-yN1X-tJVr-wlAT-2Kim49', 'scsi-0QEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826', 'scsi-SQEMU_QEMU_HARDDISK_6d5232a6-49c3-4ba2-8072-69b94c6f6826'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2d34deeb--c147--51f6--865b--40ba131b62ad-osd--block--2d34deeb--c147--51f6--865b--40ba131b62ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-i4h7F2-YliL-SO9h-Csgw-3CJZ-dMHf-khJwX8', 'scsi-0QEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef', 'scsi-SQEMU_QEMU_HARDDISK_caf17145-8e33-4113-9dc7-3e1268f339ef'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f', 'scsi-SQEMU_QEMU_HARDDISK_b36ff255-a328-4794-8843-53478b92bf6f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598844 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598882 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598893 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.598903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598929 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c5fea78-3082-490b-be73-37826e0214df-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598969 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857', 'dm-uuid-LVM-TGwqzibQaZzWxavZ7bwpJb5Bm19fuc8dzN6cYcJaRnPY0G7QShibxCft9nmjmY1y'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--277f9d3d--0c20--556e--833f--7bea0f2408d1-osd--block--277f9d3d--0c20--556e--833f--7bea0f2408d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EUQzMZ-PMCb-pPIT-wNZG-jNnZ-MeqS-hGDXl3', 'scsi-0QEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db', 'scsi-SQEMU_QEMU_HARDDISK_5ce69623-bff4-4254-af6b-7ef1616921db'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.598991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--780930f3--bf13--5252--a15a--5f9f469ca774-osd--block--780930f3--bf13--5252--a15a--5f9f469ca774'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-EP6xe2-EOUm-XIoM-j2QM-F2TH-29vI-sqP21Z', 'scsi-0QEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805', 'scsi-SQEMU_QEMU_HARDDISK_0d74391b-0b8f-495c-a577-c6c4d7ebf805'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73', 'dm-uuid-LVM-oMXBBs41xleAnydzHhAkrMLMr1a9xjvhy4VgzGoZt3LJiPBQovI0rmeIzq1Qo46Z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03', 'scsi-SQEMU_QEMU_HARDDISK_ce385ad4-e039-43b9-b94b-c72aec6ecf03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599060 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599070 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.599081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599102 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599113 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16', 'scsi-SQEMU_QEMU_HARDDISK_5acedd85-43bb-4c6d-8618-5f6f37d1f29a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fea132eb--9454--553c--8b4e--faa263198857-osd--block--fea132eb--9454--553c--8b4e--faa263198857'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ogQA8B-7r4e-vqdN-CHaY-teu3-lKEI-fsce3l', 'scsi-0QEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa', 'scsi-SQEMU_QEMU_HARDDISK_7bcb822d-7f3b-4eca-ac83-a3c6a6b727aa'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599193 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e995aa1--0e3d--5a0e--8d57--e00715a81a73-osd--block--1e995aa1--0e3d--5a0e--8d57--e00715a81a73'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MqJV1c-Heey-RafL-3X6C-WDDi-I2Rp-LWSwHf', 'scsi-0QEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8', 'scsi-SQEMU_QEMU_HARDDISK_bacac2a1-f096-4371-9863-988edf40b0d8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc', 'scsi-SQEMU_QEMU_HARDDISK_79b0442c-a1d2-4926-aa81-9c91c373f6dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-11-01-12-08-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-11-01 13:08:53.599355 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.599366 | orchestrator | 2025-11-01 13:08:53.599375 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-11-01 13:08:53.599385 | orchestrator | Saturday 01 November 2025 13:06:52 +0000 (0:00:00.707) 0:00:19.218 ***** 2025-11-01 13:08:53.599395 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.599406 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.599415 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.599425 | orchestrator | 2025-11-01 13:08:53.599434 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-11-01 13:08:53.599444 | orchestrator | Saturday 01 November 2025 13:06:53 +0000 (0:00:00.720) 0:00:19.939 ***** 2025-11-01 13:08:53.599454 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.599464 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.599476 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.599486 | orchestrator | 2025-11-01 13:08:53.599498 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-01 13:08:53.599509 | orchestrator | Saturday 01 November 2025 13:06:54 +0000 (0:00:00.567) 0:00:20.507 ***** 2025-11-01 13:08:53.599520 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.599531 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.599542 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.599553 | orchestrator | 2025-11-01 13:08:53.599564 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-11-01 13:08:53.599576 | orchestrator | Saturday 01 November 2025 13:06:54 +0000 (0:00:00.671) 0:00:21.178 ***** 2025-11-01 13:08:53.599587 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.599599 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.599610 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.599621 | orchestrator | 2025-11-01 13:08:53.599633 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-11-01 13:08:53.599644 | orchestrator | Saturday 01 November 2025 13:06:55 +0000 (0:00:00.312) 0:00:21.490 ***** 2025-11-01 13:08:53.599655 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.599667 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.599678 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.599688 | orchestrator | 2025-11-01 13:08:53.599705 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-11-01 13:08:53.599717 | orchestrator | Saturday 01 November 2025 13:06:55 +0000 (0:00:00.447) 0:00:21.938 ***** 2025-11-01 13:08:53.599728 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.599739 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.599756 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.599767 | orchestrator | 2025-11-01 13:08:53.599777 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-11-01 13:08:53.599789 | orchestrator | Saturday 01 November 2025 13:06:56 +0000 (0:00:00.582) 0:00:22.520 ***** 2025-11-01 13:08:53.599800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-11-01 13:08:53.599812 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-11-01 13:08:53.599823 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-11-01 13:08:53.599833 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-11-01 13:08:53.599842 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-11-01 13:08:53.599852 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-11-01 13:08:53.599862 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-11-01 13:08:53.599871 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-11-01 13:08:53.599881 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-11-01 13:08:53.599890 | orchestrator | 2025-11-01 13:08:53.599900 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-11-01 13:08:53.599909 | orchestrator | Saturday 01 November 2025 13:06:57 +0000 (0:00:01.031) 0:00:23.552 ***** 2025-11-01 13:08:53.599919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-11-01 13:08:53.599929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-11-01 13:08:53.599938 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-11-01 13:08:53.599948 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.599957 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-11-01 13:08:53.599967 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-11-01 13:08:53.599976 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-11-01 13:08:53.599986 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.599995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-11-01 13:08:53.600005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-11-01 13:08:53.600014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-11-01 13:08:53.600024 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.600033 | orchestrator | 2025-11-01 13:08:53.600043 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-11-01 13:08:53.600053 | orchestrator | Saturday 01 November 2025 13:06:57 +0000 (0:00:00.469) 0:00:24.022 ***** 2025-11-01 13:08:53.600063 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:08:53.600073 | orchestrator | 2025-11-01 13:08:53.600082 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-11-01 13:08:53.600093 | orchestrator | Saturday 01 November 2025 13:06:58 +0000 (0:00:00.840) 0:00:24.862 ***** 2025-11-01 13:08:53.600103 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600112 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.600122 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.600132 | orchestrator | 2025-11-01 13:08:53.600146 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-11-01 13:08:53.600156 | orchestrator | Saturday 01 November 2025 13:06:58 +0000 (0:00:00.346) 0:00:25.209 ***** 2025-11-01 13:08:53.600166 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600175 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.600185 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.600195 | orchestrator | 2025-11-01 13:08:53.600204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-11-01 13:08:53.600214 | orchestrator | Saturday 01 November 2025 13:06:59 +0000 (0:00:00.355) 0:00:25.564 ***** 2025-11-01 13:08:53.600284 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600302 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.600312 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:08:53.600321 | orchestrator | 2025-11-01 13:08:53.600331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-11-01 13:08:53.600341 | orchestrator | Saturday 01 November 2025 13:06:59 +0000 (0:00:00.348) 0:00:25.913 ***** 2025-11-01 13:08:53.600350 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.600360 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.600370 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.600379 | orchestrator | 2025-11-01 13:08:53.600389 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-11-01 13:08:53.600399 | orchestrator | Saturday 01 November 2025 13:07:00 +0000 (0:00:00.713) 0:00:26.627 ***** 2025-11-01 13:08:53.600408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:08:53.600418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:08:53.600427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:08:53.600437 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600447 | 
orchestrator | 2025-11-01 13:08:53.600456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-11-01 13:08:53.600466 | orchestrator | Saturday 01 November 2025 13:07:00 +0000 (0:00:00.452) 0:00:27.079 ***** 2025-11-01 13:08:53.600475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:08:53.600485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:08:53.600494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:08:53.600504 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600514 | orchestrator | 2025-11-01 13:08:53.600523 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-11-01 13:08:53.600537 | orchestrator | Saturday 01 November 2025 13:07:01 +0000 (0:00:00.395) 0:00:27.475 ***** 2025-11-01 13:08:53.600547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-11-01 13:08:53.600557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-11-01 13:08:53.600566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-11-01 13:08:53.600576 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600585 | orchestrator | 2025-11-01 13:08:53.600595 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-11-01 13:08:53.600604 | orchestrator | Saturday 01 November 2025 13:07:01 +0000 (0:00:00.411) 0:00:27.886 ***** 2025-11-01 13:08:53.600613 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:08:53.600621 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:08:53.600628 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:08:53.600636 | orchestrator | 2025-11-01 13:08:53.600644 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-11-01 13:08:53.600652 | orchestrator | Saturday 01 November 2025 13:07:01 
+0000 (0:00:00.372) 0:00:28.259 ***** 2025-11-01 13:08:53.600660 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-11-01 13:08:53.600668 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-11-01 13:08:53.600676 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-11-01 13:08:53.600683 | orchestrator | 2025-11-01 13:08:53.600691 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-11-01 13:08:53.600699 | orchestrator | Saturday 01 November 2025 13:07:02 +0000 (0:00:00.564) 0:00:28.823 ***** 2025-11-01 13:08:53.600707 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:08:53.600715 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:08:53.600723 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:08:53.600731 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 13:08:53.600739 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 13:08:53.600747 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 13:08:53.600759 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 13:08:53.600767 | orchestrator | 2025-11-01 13:08:53.600775 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-11-01 13:08:53.600783 | orchestrator | Saturday 01 November 2025 13:07:03 +0000 (0:00:01.253) 0:00:30.077 ***** 2025-11-01 13:08:53.600791 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-11-01 13:08:53.600798 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-11-01 13:08:53.600806 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-11-01 13:08:53.600814 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-11-01 13:08:53.600822 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-11-01 13:08:53.600830 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-11-01 13:08:53.600838 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-11-01 13:08:53.600845 | orchestrator | 2025-11-01 13:08:53.600858 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-11-01 13:08:53.600866 | orchestrator | Saturday 01 November 2025 13:07:06 +0000 (0:00:02.299) 0:00:32.377 ***** 2025-11-01 13:08:53.600874 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:08:53.600882 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:08:53.600890 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-11-01 13:08:53.600897 | orchestrator | 2025-11-01 13:08:53.600905 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-11-01 13:08:53.600913 | orchestrator | Saturday 01 November 2025 13:07:06 +0000 (0:00:00.439) 0:00:32.816 ***** 2025-11-01 13:08:53.600922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:08:53.600931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-11-01 13:08:53.600940 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:08:53.600948 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:08:53.600960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-11-01 13:08:53.600968 | orchestrator | 2025-11-01 13:08:53.600976 | orchestrator | TASK [generate keys] *********************************************************** 2025-11-01 13:08:53.600984 | orchestrator | Saturday 01 November 2025 13:07:54 +0000 (0:00:48.429) 0:01:21.246 ***** 2025-11-01 13:08:53.600992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601020 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601028 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601036 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 
13:08:53.601044 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-11-01 13:08:53.601052 | orchestrator | 2025-11-01 13:08:53.601060 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-11-01 13:08:53.601068 | orchestrator | Saturday 01 November 2025 13:08:20 +0000 (0:00:25.822) 0:01:47.068 ***** 2025-11-01 13:08:53.601075 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601083 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601091 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601099 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601114 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601122 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-11-01 13:08:53.601130 | orchestrator | 2025-11-01 13:08:53.601138 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-11-01 13:08:53.601146 | orchestrator | Saturday 01 November 2025 13:08:33 +0000 (0:00:13.023) 0:02:00.092 ***** 2025-11-01 13:08:53.601154 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601162 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601170 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601185 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601193 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601215 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601239 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601255 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601263 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601279 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601287 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-11-01 13:08:53.601302 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-11-01 13:08:53.601310 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-11-01 13:08:53.601318 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-11-01 13:08:53.601326 | orchestrator | 2025-11-01 13:08:53.601334 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:08:53.601348 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-11-01 13:08:53.601357 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-11-01 13:08:53.601365 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-11-01 13:08:53.601373 | orchestrator | 2025-11-01 13:08:53.601381 | orchestrator | 2025-11-01 13:08:53.601389 | orchestrator | 2025-11-01 13:08:53.601396 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:08:53.601420 | orchestrator | Saturday 01 November 2025 13:08:51 +0000 (0:00:18.046) 0:02:18.139 ***** 2025-11-01 13:08:53.601429 | orchestrator | =============================================================================== 2025-11-01 13:08:53.601436 | orchestrator | create openstack pool(s) ----------------------------------------------- 48.43s 2025-11-01 13:08:53.601444 | orchestrator | generate keys ---------------------------------------------------------- 25.82s 2025-11-01 13:08:53.601452 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.05s 2025-11-01 13:08:53.601460 | orchestrator | get keys from monitors ------------------------------------------------- 13.02s 2025-11-01 13:08:53.601468 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.30s 2025-11-01 13:08:53.601476 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s 2025-11-01 13:08:53.601483 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.74s 2025-11-01 13:08:53.601491 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.25s 2025-11-01 13:08:53.601499 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.03s 2025-11-01 13:08:53.601507 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s 2025-11-01 
13:08:53.601515 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2025-11-01 13:08:53.601522 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.84s 2025-11-01 13:08:53.601530 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.78s 2025-11-01 13:08:53.601538 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.74s 2025-11-01 13:08:53.601546 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s 2025-11-01 13:08:53.601554 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.71s 2025-11-01 13:08:53.601561 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.71s 2025-11-01 13:08:53.601569 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2025-11-01 13:08:53.601577 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2025-11-01 13:08:53.601585 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2025-11-01 13:08:53.601593 | orchestrator | 2025-11-01 13:08:53 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:53.601601 | orchestrator | 2025-11-01 13:08:53 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:53.601609 | orchestrator | 2025-11-01 13:08:53 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:08:53.601617 | orchestrator | 2025-11-01 13:08:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:56.641818 | orchestrator | 2025-11-01 13:08:56 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:56.642718 | orchestrator | 2025-11-01 13:08:56 | INFO  | Task 
7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:56.643440 | orchestrator | 2025-11-01 13:08:56 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:08:56.643484 | orchestrator | 2025-11-01 13:08:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:08:59.682794 | orchestrator | 2025-11-01 13:08:59 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:08:59.683825 | orchestrator | 2025-11-01 13:08:59 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:08:59.684819 | orchestrator | 2025-11-01 13:08:59 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:08:59.684842 | orchestrator | 2025-11-01 13:08:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:02.731539 | orchestrator | 2025-11-01 13:09:02 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:09:02.733482 | orchestrator | 2025-11-01 13:09:02 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:02.735170 | orchestrator | 2025-11-01 13:09:02 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:02.735298 | orchestrator | 2025-11-01 13:09:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:05.777749 | orchestrator | 2025-11-01 13:09:05 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:09:05.778601 | orchestrator | 2025-11-01 13:09:05 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:05.781214 | orchestrator | 2025-11-01 13:09:05 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:05.781290 | orchestrator | 2025-11-01 13:09:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:08.838737 | orchestrator | 2025-11-01 13:09:08 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state 
STARTED 2025-11-01 13:09:08.840728 | orchestrator | 2025-11-01 13:09:08 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:08.840759 | orchestrator | 2025-11-01 13:09:08 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:08.840772 | orchestrator | 2025-11-01 13:09:08 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:11.879994 | orchestrator | 2025-11-01 13:09:11 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:09:11.883609 | orchestrator | 2025-11-01 13:09:11 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:11.885698 | orchestrator | 2025-11-01 13:09:11 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:11.885880 | orchestrator | 2025-11-01 13:09:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:14.951130 | orchestrator | 2025-11-01 13:09:14 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:09:14.951255 | orchestrator | 2025-11-01 13:09:14 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:14.952356 | orchestrator | 2025-11-01 13:09:14 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:14.952381 | orchestrator | 2025-11-01 13:09:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:17.991278 | orchestrator | 2025-11-01 13:09:17 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED 2025-11-01 13:09:17.991750 | orchestrator | 2025-11-01 13:09:17 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:09:17.993510 | orchestrator | 2025-11-01 13:09:17 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED 2025-11-01 13:09:17.993562 | orchestrator | 2025-11-01 13:09:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:09:21.035313 | orchestrator | 
2025-11-01 13:09:21 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:21.036793 | orchestrator | 2025-11-01 13:09:21 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:21.038406 | orchestrator | 2025-11-01 13:09:21 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:21.038431 | orchestrator | 2025-11-01 13:09:21 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:24.087733 | orchestrator | 2025-11-01 13:09:24 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:24.089410 | orchestrator | 2025-11-01 13:09:24 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:24.091733 | orchestrator | 2025-11-01 13:09:24 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:24.091765 | orchestrator | 2025-11-01 13:09:24 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:27.141966 | orchestrator | 2025-11-01 13:09:27 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:27.142868 | orchestrator | 2025-11-01 13:09:27 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:27.144581 | orchestrator | 2025-11-01 13:09:27 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:27.144604 | orchestrator | 2025-11-01 13:09:27 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:30.197380 | orchestrator | 2025-11-01 13:09:30 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:30.198306 | orchestrator | 2025-11-01 13:09:30 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:30.199473 | orchestrator | 2025-11-01 13:09:30 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:30.199782 | orchestrator | 2025-11-01 13:09:30 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:33.248618 | orchestrator | 2025-11-01 13:09:33 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:33.249489 | orchestrator | 2025-11-01 13:09:33 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:33.250772 | orchestrator | 2025-11-01 13:09:33 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:33.250800 | orchestrator | 2025-11-01 13:09:33 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:36.298311 | orchestrator | 2025-11-01 13:09:36 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:36.302835 | orchestrator | 2025-11-01 13:09:36 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:36.304216 | orchestrator | 2025-11-01 13:09:36 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state STARTED
2025-11-01 13:09:36.305879 | orchestrator | 2025-11-01 13:09:36 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:39.352084 | orchestrator | 2025-11-01 13:09:39 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:39.352710 | orchestrator | 2025-11-01 13:09:39 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:39.354391 | orchestrator | 2025-11-01 13:09:39 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:39.355553 | orchestrator | 2025-11-01 13:09:39 | INFO  | Task 4f3c1c4f-a7f7-4be5-891b-f10215267888 is in state SUCCESS
2025-11-01 13:09:39.355602 | orchestrator | 2025-11-01 13:09:39 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:42.395855 | orchestrator | 2025-11-01 13:09:42 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:42.397395 | orchestrator | 2025-11-01 13:09:42 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:42.398855 | orchestrator | 2025-11-01 13:09:42 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:42.398879 | orchestrator | 2025-11-01 13:09:42 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:45.440019 | orchestrator | 2025-11-01 13:09:45 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:45.442176 | orchestrator | 2025-11-01 13:09:45 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:45.443611 | orchestrator | 2025-11-01 13:09:45 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:45.443647 | orchestrator | 2025-11-01 13:09:45 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:48.492583 | orchestrator | 2025-11-01 13:09:48 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:48.492650 | orchestrator | 2025-11-01 13:09:48 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:48.493229 | orchestrator | 2025-11-01 13:09:48 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:48.493270 | orchestrator | 2025-11-01 13:09:48 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:51.541552 | orchestrator | 2025-11-01 13:09:51 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:51.544169 | orchestrator | 2025-11-01 13:09:51 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:51.547391 | orchestrator | 2025-11-01 13:09:51 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:51.547414 | orchestrator | 2025-11-01 13:09:51 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:54.580604 | orchestrator | 2025-11-01 13:09:54 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:54.582660 | orchestrator | 2025-11-01 13:09:54 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:54.583229 | orchestrator | 2025-11-01 13:09:54 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:54.583302 | orchestrator | 2025-11-01 13:09:54 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:09:57.633975 | orchestrator | 2025-11-01 13:09:57 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:09:57.635767 | orchestrator | 2025-11-01 13:09:57 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:09:57.637380 | orchestrator | 2025-11-01 13:09:57 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:09:57.637579 | orchestrator | 2025-11-01 13:09:57 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:10:00.671132 | orchestrator | 2025-11-01 13:10:00 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:10:00.672210 | orchestrator | 2025-11-01 13:10:00 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:10:00.673585 | orchestrator | 2025-11-01 13:10:00 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:10:00.673632 | orchestrator | 2025-11-01 13:10:00 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:10:03.705590 | orchestrator | 2025-11-01 13:10:03 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:10:03.706897 | orchestrator | 2025-11-01 13:10:03 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:10:03.707799 | orchestrator | 2025-11-01 13:10:03 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:10:03.707824 | orchestrator | 2025-11-01 13:10:03 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:10:06.756102 | orchestrator | 2025-11-01 13:10:06 | INFO  | Task
95039882-6fcf-434d-a732-4beeb862f25a is in state STARTED
2025-11-01 13:10:06.757716 | orchestrator | 2025-11-01 13:10:06 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED
2025-11-01 13:10:06.759085 | orchestrator | 2025-11-01 13:10:06 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED
2025-11-01 13:10:06.759111 | orchestrator | 2025-11-01 13:10:06 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:10:09.805318 | orchestrator |
2025-11-01 13:10:09.805402 | orchestrator |
2025-11-01 13:10:09.805415 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-11-01 13:10:09.805427 | orchestrator |
2025-11-01 13:10:09.805438 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-11-01 13:10:09.805450 | orchestrator | Saturday 01 November 2025 13:08:57 +0000 (0:00:00.181) 0:00:00.181 *****
2025-11-01 13:10:09.805461 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-11-01 13:10:09.805474 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-11-01 13:10:09.805507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-11-01 13:10:09.805528 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-11-01 13:10:09.805540 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-11-01 13:10:09.805551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-11-01 13:10:09.805561 | orchestrator |
2025-11-01 13:10:09.805573 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-11-01 13:10:09.805584 | orchestrator | Saturday 01 November 2025 13:09:03 +0000 (0:00:06.023) 0:00:06.204 *****
2025-11-01 13:10:09.805595 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-11-01 13:10:09.805606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805627 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-11-01 13:10:09.805638 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805649 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-11-01 13:10:09.805660 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-11-01 13:10:09.805671 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-11-01 13:10:09.805704 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-11-01 13:10:09.805716 | orchestrator |
2025-11-01 13:10:09.805727 | orchestrator | TASK [Create share directory] **************************************************
2025-11-01 13:10:09.805737 | orchestrator | Saturday 01 November 2025 13:09:08 +0000 (0:00:01.238) 0:00:10.996 *****
2025-11-01 13:10:09.805749 | orchestrator | changed: [testbed-manager -> localhost]
2025-11-01 13:10:09.805760 | orchestrator |
2025-11-01 13:10:09.805771 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-11-01 13:10:09.805782 | orchestrator | Saturday 01 November 2025 13:09:09 +0000 (0:00:01.238) 0:00:12.234 *****
2025-11-01 13:10:09.805793 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-11-01 13:10:09.805804 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805815 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805826 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-11-01 13:10:09.805839 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.805866 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-11-01 13:10:09.805879 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-11-01 13:10:09.805891 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-11-01 13:10:09.805904 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-11-01 13:10:09.805916 | orchestrator |
2025-11-01 13:10:09.805928 | orchestrator | TASK [Check if target directories exist] ***************************************
2025-11-01 13:10:09.805942 | orchestrator | Saturday 01 November 2025 13:09:25 +0000 (0:00:16.423) 0:00:28.657 *****
2025-11-01 13:10:09.805954 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2025-11-01 13:10:09.805967 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2025-11-01 13:10:09.805981 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-11-01 13:10:09.805993 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-11-01 13:10:09.806071 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-11-01 13:10:09.806086 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-11-01 13:10:09.806099 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2025-11-01 13:10:09.806111 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2025-11-01 13:10:09.806123 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2025-11-01 13:10:09.806136 | orchestrator |
2025-11-01 13:10:09.806148 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-11-01 13:10:09.806161 | orchestrator | Saturday 01 November 2025 13:09:29 +0000 (0:00:03.271) 0:00:31.929 *****
2025-11-01 13:10:09.806174 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-11-01 13:10:09.806186 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.806197 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.806208 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-11-01 13:10:09.806218 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-11-01 13:10:09.806229 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-11-01 13:10:09.806267 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-11-01 13:10:09.806278 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-11-01 13:10:09.806629 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-11-01 13:10:09.806653 | orchestrator |
2025-11-01 13:10:09.806665 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:10:09.806676 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:10:09.806688 | orchestrator |
2025-11-01 13:10:09.806699 | orchestrator |
2025-11-01 13:10:09.806709 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:10:09.806720 | orchestrator | Saturday 01 November 2025 13:09:36 +0000 (0:00:07.288) 0:00:39.218 *****
2025-11-01 13:10:09.806731 | orchestrator | ===============================================================================
2025-11-01 13:10:09.806741 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.42s
2025-11-01 13:10:09.806752 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.29s
2025-11-01 13:10:09.806762 | orchestrator | Check if ceph keys exist ------------------------------------------------ 6.02s
2025-11-01 13:10:09.806773 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.79s
2025-11-01 13:10:09.806783 | orchestrator | Check if target directories exist --------------------------------------- 3.27s
2025-11-01 13:10:09.806794 | orchestrator | Create share directory -------------------------------------------------- 1.24s
2025-11-01 13:10:09.806804 | orchestrator |
2025-11-01 13:10:09.806815 | orchestrator | 2025-11-01 13:10:09 | INFO  | Task 95039882-6fcf-434d-a732-4beeb862f25a is in state SUCCESS
2025-11-01 13:10:09.808045 | orchestrator | 2025-11-01
13:10:09.808080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:10:09.808091 | orchestrator | 2025-11-01 13:10:09.808102 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:10:09.808113 | orchestrator | Saturday 01 November 2025 13:08:12 +0000 (0:00:00.287) 0:00:00.287 ***** 2025-11-01 13:10:09.808124 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:10:09.808136 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:10:09.808146 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:10:09.808157 | orchestrator | 2025-11-01 13:10:09.808168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:10:09.808179 | orchestrator | Saturday 01 November 2025 13:08:12 +0000 (0:00:00.387) 0:00:00.674 ***** 2025-11-01 13:10:09.808190 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-11-01 13:10:09.808201 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-11-01 13:10:09.808212 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-11-01 13:10:09.808223 | orchestrator | 2025-11-01 13:10:09.808266 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-11-01 13:10:09.808278 | orchestrator | 2025-11-01 13:10:09.808288 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 13:10:09.808299 | orchestrator | Saturday 01 November 2025 13:08:13 +0000 (0:00:00.476) 0:00:01.151 ***** 2025-11-01 13:10:09.808310 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:10:09.808321 | orchestrator | 2025-11-01 13:10:09.808332 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-11-01 13:10:09.808342 | orchestrator | Saturday 01 November 
2025 13:08:13 +0000 (0:00:00.548) 0:00:01.700 ***** 2025-11-01 13:10:09.808360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:10:09.808562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:10:09.808593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:10:09.808606 | orchestrator | 2025-11-01 13:10:09.808617 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-11-01 13:10:09.808629 | orchestrator | Saturday 01 November 2025 13:08:15 +0000 (0:00:01.448) 0:00:03.148 ***** 2025-11-01 13:10:09.808639 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:10:09.808650 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:10:09.808661 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:10:09.808672 | orchestrator | 2025-11-01 13:10:09.808682 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 13:10:09.808693 | orchestrator | Saturday 01 November 2025 13:08:15 +0000 (0:00:00.511) 0:00:03.659 ***** 2025-11-01 13:10:09.808704 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 13:10:09.808721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 13:10:09.808732 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 13:10:09.808743 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 13:10:09.808753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'mistral', 'enabled': False})  2025-11-01 13:10:09.808764 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 13:10:09.808775 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-11-01 13:10:09.808785 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 13:10:09.808796 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 13:10:09.808812 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 13:10:09.808823 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 13:10:09.808840 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 13:10:09.808851 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-11-01 13:10:09.808862 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 13:10:09.808872 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-11-01 13:10:09.808883 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 13:10:09.808893 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-11-01 13:10:09.808904 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-11-01 13:10:09.808915 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-11-01 13:10:09.808925 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-11-01 13:10:09.808936 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-11-01 13:10:09.808947 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-11-01 13:10:09.808957 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-11-01 13:10:09.808968 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-11-01 13:10:09.808979 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-11-01 13:10:09.808992 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-11-01 13:10:09.809003 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-11-01 13:10:09.809014 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-11-01 13:10:09.809025 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-11-01 13:10:09.809035 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-11-01 13:10:09.809046 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-11-01 13:10:09.809057 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-11-01 13:10:09.809068 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-11-01 13:10:09.809079 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-11-01 13:10:09.809089 | orchestrator | 2025-11-01 13:10:09.809100 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 13:10:09.809110 | orchestrator | Saturday 01 November 2025 13:08:16 +0000 (0:00:00.890) 0:00:04.550 ***** 2025-11-01 13:10:09.809121 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:10:09.809132 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:10:09.809143 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:10:09.809153 | orchestrator | 2025-11-01 13:10:09.809164 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-11-01 13:10:09.809177 | orchestrator | Saturday 01 November 2025 13:08:17 +0000 (0:00:00.325) 0:00:04.875 ***** 2025-11-01 13:10:09.809196 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:10:09.809208 | orchestrator | 2025-11-01 13:10:09.809225 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-11-01 13:10:09.809265 | orchestrator | Saturday 01 November 2025 13:08:17 +0000 (0:00:00.134) 0:00:05.009 ***** 2025-11-01 13:10:09.809277 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:10:09.809290 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:10:09.809302 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:10:09.809314 | orchestrator | 2025-11-01 13:10:09.809326 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-11-01 13:10:09.809338 | orchestrator | Saturday 01 November 2025 13:08:17 +0000 (0:00:00.500) 0:00:05.509 ***** 2025-11-01 13:10:09.809351 | orchestrator | ok: [testbed-node-0] 2025-11-01 
13:10:09.809363 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.809375 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.809387 | orchestrator |
2025-11-01 13:10:09.809399 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.809412 | orchestrator | Saturday 01 November 2025 13:08:18 +0000 (0:00:00.422) 0:00:05.932 *****
2025-11-01 13:10:09.809429 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809441 | orchestrator |
2025-11-01 13:10:09.809454 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.809466 | orchestrator | Saturday 01 November 2025 13:08:18 +0000 (0:00:00.148) 0:00:06.080 *****
2025-11-01 13:10:09.809478 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809491 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.809503 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.809515 | orchestrator |
2025-11-01 13:10:09.809528 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.809541 | orchestrator | Saturday 01 November 2025 13:08:18 +0000 (0:00:00.406) 0:00:06.487 *****
2025-11-01 13:10:09.809552 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.809563 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.809573 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.809584 | orchestrator |
2025-11-01 13:10:09.809594 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.809605 | orchestrator | Saturday 01 November 2025 13:08:19 +0000 (0:00:00.357) 0:00:06.845 *****
2025-11-01 13:10:09.809616 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809626 | orchestrator |
2025-11-01 13:10:09.809637 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.809647 | orchestrator | Saturday 01 November 2025 13:08:19 +0000 (0:00:00.157) 0:00:07.002 *****
2025-11-01 13:10:09.809658 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809669 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.809679 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.809690 | orchestrator |
2025-11-01 13:10:09.809701 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.809711 | orchestrator | Saturday 01 November 2025 13:08:19 +0000 (0:00:00.604) 0:00:07.607 *****
2025-11-01 13:10:09.809722 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.809733 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.809743 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.809754 | orchestrator |
2025-11-01 13:10:09.809765 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.809775 | orchestrator | Saturday 01 November 2025 13:08:20 +0000 (0:00:00.376) 0:00:07.983 *****
2025-11-01 13:10:09.809786 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809796 | orchestrator |
2025-11-01 13:10:09.809807 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.809818 | orchestrator | Saturday 01 November 2025 13:08:20 +0000 (0:00:00.136) 0:00:08.120 *****
2025-11-01 13:10:09.809828 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809839 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.809856 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.809867 | orchestrator |
2025-11-01 13:10:09.809878 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.809889 | orchestrator | Saturday 01 November 2025 13:08:20 +0000 (0:00:00.355) 0:00:08.476 *****
2025-11-01 13:10:09.809899 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.809910 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.809920 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.809931 | orchestrator |
2025-11-01 13:10:09.809942 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.809952 | orchestrator | Saturday 01 November 2025 13:08:21 +0000 (0:00:00.565) 0:00:09.041 *****
2025-11-01 13:10:09.809963 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.809974 | orchestrator |
2025-11-01 13:10:09.809984 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.809995 | orchestrator | Saturday 01 November 2025 13:08:21 +0000 (0:00:00.138) 0:00:09.179 *****
2025-11-01 13:10:09.810006 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810100 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.810212 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.810226 | orchestrator |
2025-11-01 13:10:09.810296 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.810308 | orchestrator | Saturday 01 November 2025 13:08:21 +0000 (0:00:00.323) 0:00:09.503 *****
2025-11-01 13:10:09.810319 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.810330 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.810341 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.810351 | orchestrator |
2025-11-01 13:10:09.810362 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.810372 | orchestrator | Saturday 01 November 2025 13:08:22 +0000 (0:00:00.365) 0:00:09.869 *****
2025-11-01 13:10:09.810383 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810394 | orchestrator |
2025-11-01 13:10:09.810404 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.810415 | orchestrator | Saturday 01 November 2025 13:08:22 +0000 (0:00:00.153) 0:00:10.023 *****
2025-11-01 13:10:09.810426 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810436 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.810447 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.810458 | orchestrator |
2025-11-01 13:10:09.810469 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.810489 | orchestrator | Saturday 01 November 2025 13:08:22 +0000 (0:00:00.299) 0:00:10.322 *****
2025-11-01 13:10:09.810500 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.810510 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.810520 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.810529 | orchestrator |
2025-11-01 13:10:09.810539 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.810548 | orchestrator | Saturday 01 November 2025 13:08:23 +0000 (0:00:00.612) 0:00:10.935 *****
2025-11-01 13:10:09.810558 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810567 | orchestrator |
2025-11-01 13:10:09.810577 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.810587 | orchestrator | Saturday 01 November 2025 13:08:23 +0000 (0:00:00.143) 0:00:11.079 *****
2025-11-01 13:10:09.810596 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810606 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.810615 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.810625 | orchestrator |
2025-11-01 13:10:09.810635 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.810651 | orchestrator | Saturday 01 November 2025 13:08:23 +0000 (0:00:00.329) 0:00:11.408 *****
2025-11-01 13:10:09.810661 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.810670 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.810680 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.810698 | orchestrator |
2025-11-01 13:10:09.810708 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.810717 | orchestrator | Saturday 01 November 2025 13:08:23 +0000 (0:00:00.336) 0:00:11.744 *****
2025-11-01 13:10:09.810727 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810736 | orchestrator |
2025-11-01 13:10:09.810746 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.810755 | orchestrator | Saturday 01 November 2025 13:08:24 +0000 (0:00:00.137) 0:00:11.881 *****
2025-11-01 13:10:09.810765 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810775 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.810784 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.810794 | orchestrator |
2025-11-01 13:10:09.810803 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.810813 | orchestrator | Saturday 01 November 2025 13:08:24 +0000 (0:00:00.354) 0:00:12.236 *****
2025-11-01 13:10:09.810823 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.810832 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.810843 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.810855 | orchestrator |
2025-11-01 13:10:09.810867 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.810878 | orchestrator | Saturday 01 November 2025 13:08:25 +0000 (0:00:00.618) 0:00:12.855 *****
2025-11-01 13:10:09.810888 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810899 | orchestrator |
2025-11-01 13:10:09.810911 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.810922 | orchestrator | Saturday 01 November 2025 13:08:25 +0000 (0:00:00.139) 0:00:12.995 *****
2025-11-01 13:10:09.810933 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.810944 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.810955 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.810966 | orchestrator |
2025-11-01 13:10:09.810977 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-11-01 13:10:09.810989 | orchestrator | Saturday 01 November 2025 13:08:25 +0000 (0:00:00.331) 0:00:13.327 *****
2025-11-01 13:10:09.811000 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:10:09.811011 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:10:09.811022 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:10:09.811033 | orchestrator |
2025-11-01 13:10:09.811044 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-11-01 13:10:09.811055 | orchestrator | Saturday 01 November 2025 13:08:25 +0000 (0:00:00.351) 0:00:13.678 *****
2025-11-01 13:10:09.811067 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811077 | orchestrator |
2025-11-01 13:10:09.811088 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-11-01 13:10:09.811099 | orchestrator | Saturday 01 November 2025 13:08:26 +0000 (0:00:00.143) 0:00:13.822 *****
2025-11-01 13:10:09.811110 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811122 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.811133 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.811144 | orchestrator |
2025-11-01 13:10:09.811155 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-11-01 13:10:09.811166 | orchestrator | Saturday 01 November 2025 13:08:26 +0000 (0:00:00.562) 0:00:14.385 *****
2025-11-01 13:10:09.811177 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:10:09.811189 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:10:09.811200 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:10:09.811209 | orchestrator |
2025-11-01 13:10:09.811219 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-11-01 13:10:09.811229 | orchestrator | Saturday 01 November 2025 13:08:28 +0000 (0:00:01.797) 0:00:16.183 *****
2025-11-01 13:10:09.811255 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-11-01 13:10:09.811265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-11-01 13:10:09.811287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-11-01 13:10:09.811296 | orchestrator |
2025-11-01 13:10:09.811306 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-11-01 13:10:09.811315 | orchestrator | Saturday 01 November 2025 13:08:30 +0000 (0:00:02.300) 0:00:18.484 *****
2025-11-01 13:10:09.811325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-11-01 13:10:09.811335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-11-01 13:10:09.811344 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-11-01 13:10:09.811354 | orchestrator |
2025-11-01 13:10:09.811364 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-11-01 13:10:09.811379 | orchestrator | Saturday 01 November 2025 13:08:33 +0000 (0:00:02.637) 0:00:21.122 *****
2025-11-01 13:10:09.811389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-11-01 13:10:09.811398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-11-01 13:10:09.811408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-11-01 13:10:09.811418 | orchestrator |
2025-11-01 13:10:09.811427 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-11-01 13:10:09.811437 | orchestrator | Saturday 01 November 2025 13:08:35 +0000 (0:00:02.444) 0:00:23.566 *****
2025-11-01 13:10:09.811447 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811456 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.811466 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.811476 | orchestrator |
2025-11-01 13:10:09.811490 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-11-01 13:10:09.811499 | orchestrator | Saturday 01 November 2025 13:08:36 +0000 (0:00:00.382) 0:00:23.949 *****
2025-11-01 13:10:09.811509 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811519 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.811529 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.811538 | orchestrator |
2025-11-01 13:10:09.811548 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-11-01 13:10:09.811558 | orchestrator | Saturday 01 November 2025 13:08:36 +0000 (0:00:00.338) 0:00:24.287 *****
2025-11-01 13:10:09.811568 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:10:09.811577 | orchestrator |
2025-11-01 13:10:09.811587 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-11-01 13:10:09.811597 | orchestrator |
Saturday 01 November 2025 13:08:37 +0000 (0:00:00.886) 0:00:25.173 *****
2025-11-01 13:10:09.811608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811668 | orchestrator |
2025-11-01 13:10:09.811677 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-11-01 13:10:09.811687 | orchestrator | Saturday 01 November 2025 13:08:39 +0000 (0:00:01.715) 0:00:26.889 *****
2025-11-01 13:10:09.811710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811722 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811755 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.811771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811914 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.811924 | orchestrator |
2025-11-01 13:10:09.811933 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-11-01 13:10:09.811816 | orchestrator | Saturday
01 November 2025 13:08:39 +0000 (0:00:00.776) 0:00:27.666 *****
2025-11-01 13:10:09.811833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811844 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:10:09.811859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811879 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:10:09.811902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811914 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:10:09.811924 | orchestrator |
2025-11-01 13:10:09.811933 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-11-01 13:10:09.811943 | orchestrator | Saturday 01 November 2025 13:08:41 +0000 (0:00:01.294) 0:00:28.961 *****
2025-11-01 13:10:09.811953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.811993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-11-01 13:10:09.812006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no',
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-11-01 13:10:09.812023 | orchestrator | 2025-11-01 13:10:09.812033 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 13:10:09.812042 | orchestrator | Saturday 01 November 2025 13:08:42 +0000 (0:00:01.724) 
0:00:30.685 ***** 2025-11-01 13:10:09.812052 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:10:09.812062 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:10:09.812072 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:10:09.812081 | orchestrator | 2025-11-01 13:10:09.812091 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-11-01 13:10:09.812101 | orchestrator | Saturday 01 November 2025 13:08:43 +0000 (0:00:00.352) 0:00:31.037 ***** 2025-11-01 13:10:09.812110 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:10:09.812120 | orchestrator | 2025-11-01 13:10:09.812130 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-11-01 13:10:09.812144 | orchestrator | Saturday 01 November 2025 13:08:43 +0000 (0:00:00.589) 0:00:31.627 ***** 2025-11-01 13:10:09.812154 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:10:09.812164 | orchestrator | 2025-11-01 13:10:09.812173 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-11-01 13:10:09.812183 | orchestrator | Saturday 01 November 2025 13:08:46 +0000 (0:00:02.860) 0:00:34.487 ***** 2025-11-01 13:10:09.812192 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:10:09.812202 | orchestrator | 2025-11-01 13:10:09.812211 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-11-01 13:10:09.812221 | orchestrator | Saturday 01 November 2025 13:08:49 +0000 (0:00:03.180) 0:00:37.667 ***** 2025-11-01 13:10:09.812230 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:10:09.812255 | orchestrator | 2025-11-01 13:10:09.812265 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 13:10:09.812274 | orchestrator | Saturday 01 November 2025 13:09:07 +0000 
(0:00:18.032) 0:00:55.699 ***** 2025-11-01 13:10:09.812284 | orchestrator | 2025-11-01 13:10:09.812301 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 13:10:09.812311 | orchestrator | Saturday 01 November 2025 13:09:08 +0000 (0:00:00.108) 0:00:55.808 ***** 2025-11-01 13:10:09.812321 | orchestrator | 2025-11-01 13:10:09.812330 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-11-01 13:10:09.812346 | orchestrator | Saturday 01 November 2025 13:09:08 +0000 (0:00:00.072) 0:00:55.881 ***** 2025-11-01 13:10:09.812356 | orchestrator | 2025-11-01 13:10:09.812365 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-11-01 13:10:09.812375 | orchestrator | Saturday 01 November 2025 13:09:08 +0000 (0:00:00.102) 0:00:55.983 ***** 2025-11-01 13:10:09.812384 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:10:09.812394 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:10:09.812403 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:10:09.812413 | orchestrator | 2025-11-01 13:10:09.812423 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:10:09.812432 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-11-01 13:10:09.812442 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 13:10:09.812452 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-11-01 13:10:09.812461 | orchestrator | 2025-11-01 13:10:09.812471 | orchestrator | 2025-11-01 13:10:09.812480 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:10:09.812489 | orchestrator | Saturday 01 November 2025 13:10:06 +0000 (0:00:58.742) 0:01:54.726 
***** 2025-11-01 13:10:09.812499 | orchestrator | =============================================================================== 2025-11-01 13:10:09.812508 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.74s 2025-11-01 13:10:09.812518 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.03s 2025-11-01 13:10:09.812527 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.18s 2025-11-01 13:10:09.812536 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.86s 2025-11-01 13:10:09.812546 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.64s 2025-11-01 13:10:09.812555 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.44s 2025-11-01 13:10:09.812565 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.30s 2025-11-01 13:10:09.812574 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.80s 2025-11-01 13:10:09.812584 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.72s 2025-11-01 13:10:09.812593 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.72s 2025-11-01 13:10:09.812603 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.45s 2025-11-01 13:10:09.812612 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.29s 2025-11-01 13:10:09.812621 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.89s 2025-11-01 13:10:09.812631 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.89s 2025-11-01 13:10:09.812640 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.78s 
2025-11-01 13:10:09.812649 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2025-11-01 13:10:09.812659 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2025-11-01 13:10:09.812668 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.60s 2025-11-01 13:10:09.812678 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2025-11-01 13:10:09.812687 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2025-11-01 13:10:09.812696 | orchestrator | 2025-11-01 13:10:09 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:09.812706 | orchestrator | 2025-11-01 13:10:09 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:09.812721 | orchestrator | 2025-11-01 13:10:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:12.849609 | orchestrator | 2025-11-01 13:10:12 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:12.851431 | orchestrator | 2025-11-01 13:10:12 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:12.851467 | orchestrator | 2025-11-01 13:10:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:15.887677 | orchestrator | 2025-11-01 13:10:15 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:15.888943 | orchestrator | 2025-11-01 13:10:15 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:15.888974 | orchestrator | 2025-11-01 13:10:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:18.923970 | orchestrator | 2025-11-01 13:10:18 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:18.924886 | orchestrator | 2025-11-01 13:10:18 | INFO  | Task 
611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:18.924918 | orchestrator | 2025-11-01 13:10:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:21.965008 | orchestrator | 2025-11-01 13:10:21 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:21.966545 | orchestrator | 2025-11-01 13:10:21 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:21.966576 | orchestrator | 2025-11-01 13:10:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:25.003562 | orchestrator | 2025-11-01 13:10:25 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:25.004371 | orchestrator | 2025-11-01 13:10:25 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:25.004406 | orchestrator | 2025-11-01 13:10:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:28.042695 | orchestrator | 2025-11-01 13:10:28 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:28.043161 | orchestrator | 2025-11-01 13:10:28 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:28.043189 | orchestrator | 2025-11-01 13:10:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:31.082466 | orchestrator | 2025-11-01 13:10:31 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:31.083679 | orchestrator | 2025-11-01 13:10:31 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 13:10:31.083702 | orchestrator | 2025-11-01 13:10:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:34.118721 | orchestrator | 2025-11-01 13:10:34 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:34.119407 | orchestrator | 2025-11-01 13:10:34 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state STARTED 2025-11-01 
13:10:34.119444 | orchestrator | 2025-11-01 13:10:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:37.163017 | orchestrator | 2025-11-01 13:10:37 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:37.166107 | orchestrator | 2025-11-01 13:10:37 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:37.169119 | orchestrator | 2025-11-01 13:10:37 | INFO  | Task 611ffeee-534a-4628-aa5b-2ec4e3c83484 is in state SUCCESS 2025-11-01 13:10:37.170684 | orchestrator | 2025-11-01 13:10:37 | INFO  | Task 28a2865d-4b9f-4f8f-96e1-75cb37d18b23 is in state STARTED 2025-11-01 13:10:37.170959 | orchestrator | 2025-11-01 13:10:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:40.220434 | orchestrator | 2025-11-01 13:10:40 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:40.221933 | orchestrator | 2025-11-01 13:10:40 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:40.222932 | orchestrator | 2025-11-01 13:10:40 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:40.223933 | orchestrator | 2025-11-01 13:10:40 | INFO  | Task 28a2865d-4b9f-4f8f-96e1-75cb37d18b23 is in state STARTED 2025-11-01 13:10:40.224052 | orchestrator | 2025-11-01 13:10:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:43.261037 | orchestrator | 2025-11-01 13:10:43 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:43.261454 | orchestrator | 2025-11-01 13:10:43 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:43.263234 | orchestrator | 2025-11-01 13:10:43 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:43.264945 | orchestrator | 2025-11-01 13:10:43 | INFO  | Task 28a2865d-4b9f-4f8f-96e1-75cb37d18b23 is in state STARTED 2025-11-01 13:10:43.266782 | orchestrator 
| 2025-11-01 13:10:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:46.307860 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:46.308036 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:10:46.308845 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:46.309382 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:10:46.310359 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:46.311138 | orchestrator | 2025-11-01 13:10:46 | INFO  | Task 28a2865d-4b9f-4f8f-96e1-75cb37d18b23 is in state SUCCESS 2025-11-01 13:10:46.311164 | orchestrator | 2025-11-01 13:10:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:49.349645 | orchestrator | 2025-11-01 13:10:49 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:49.351089 | orchestrator | 2025-11-01 13:10:49 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:10:49.351569 | orchestrator | 2025-11-01 13:10:49 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:49.352583 | orchestrator | 2025-11-01 13:10:49 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:10:49.354100 | orchestrator | 2025-11-01 13:10:49 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:49.354125 | orchestrator | 2025-11-01 13:10:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:52.381022 | orchestrator | 2025-11-01 13:10:52 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:52.385713 | orchestrator | 2025-11-01 13:10:52 | INFO  | 
Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:10:52.387113 | orchestrator | 2025-11-01 13:10:52 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:52.388221 | orchestrator | 2025-11-01 13:10:52 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:10:52.389184 | orchestrator | 2025-11-01 13:10:52 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:52.389207 | orchestrator | 2025-11-01 13:10:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:55.435728 | orchestrator | 2025-11-01 13:10:55 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:55.436314 | orchestrator | 2025-11-01 13:10:55 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:10:55.439733 | orchestrator | 2025-11-01 13:10:55 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:55.444584 | orchestrator | 2025-11-01 13:10:55 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:10:55.447478 | orchestrator | 2025-11-01 13:10:55 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:55.447576 | orchestrator | 2025-11-01 13:10:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:10:58.494894 | orchestrator | 2025-11-01 13:10:58 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:10:58.497801 | orchestrator | 2025-11-01 13:10:58 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:10:58.499301 | orchestrator | 2025-11-01 13:10:58 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:10:58.501915 | orchestrator | 2025-11-01 13:10:58 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:10:58.503993 | orchestrator | 2025-11-01 13:10:58 | INFO  | Task 
7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:10:58.504369 | orchestrator | 2025-11-01 13:10:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:01.545979 | orchestrator | 2025-11-01 13:11:01 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:01.548902 | orchestrator | 2025-11-01 13:11:01 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:01.551659 | orchestrator | 2025-11-01 13:11:01 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:01.554828 | orchestrator | 2025-11-01 13:11:01 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:01.556652 | orchestrator | 2025-11-01 13:11:01 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:11:01.556674 | orchestrator | 2025-11-01 13:11:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:04.609388 | orchestrator | 2025-11-01 13:11:04 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:04.612429 | orchestrator | 2025-11-01 13:11:04 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:04.618479 | orchestrator | 2025-11-01 13:11:04 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:04.618514 | orchestrator | 2025-11-01 13:11:04 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:04.618526 | orchestrator | 2025-11-01 13:11:04 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:11:04.618538 | orchestrator | 2025-11-01 13:11:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:07.669714 | orchestrator | 2025-11-01 13:11:07 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:07.671004 | orchestrator | 2025-11-01 13:11:07 | INFO  | Task 
c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:07.674293 | orchestrator | 2025-11-01 13:11:07 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:07.678480 | orchestrator | 2025-11-01 13:11:07 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:07.679843 | orchestrator | 2025-11-01 13:11:07 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:11:07.681353 | orchestrator | 2025-11-01 13:11:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:10.757082 | orchestrator | 2025-11-01 13:11:10 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:10.757172 | orchestrator | 2025-11-01 13:11:10 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:10.757184 | orchestrator | 2025-11-01 13:11:10 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:10.757194 | orchestrator | 2025-11-01 13:11:10 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:10.757204 | orchestrator | 2025-11-01 13:11:10 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:11:10.757214 | orchestrator | 2025-11-01 13:11:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:13.880996 | orchestrator | 2025-11-01 13:11:13 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:13.881091 | orchestrator | 2025-11-01 13:11:13 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:13.881106 | orchestrator | 2025-11-01 13:11:13 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:13.881119 | orchestrator | 2025-11-01 13:11:13 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:13.881130 | orchestrator | 2025-11-01 13:11:13 | INFO  | Task 
7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state STARTED 2025-11-01 13:11:13.881142 | orchestrator | 2025-11-01 13:11:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:11:16.857766 | orchestrator | 2025-11-01 13:11:16 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:11:16.858597 | orchestrator | 2025-11-01 13:11:16 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:11:16.859510 | orchestrator | 2025-11-01 13:11:16 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED 2025-11-01 13:11:16.861587 | orchestrator | 2025-11-01 13:11:16 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED 2025-11-01 13:11:16.863846 | orchestrator | 2025-11-01 13:11:16 | INFO  | Task 7ce6c458-b72d-4cf1-a8d0-bee97a04b07d is in state SUCCESS 2025-11-01 13:11:16.864210 | orchestrator | 2025-11-01 13:11:16.864234 | orchestrator | 2025-11-01 13:11:16.864246 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-11-01 13:11:16.864296 | orchestrator | 2025-11-01 13:11:16.864316 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-11-01 13:11:16.864335 | orchestrator | Saturday 01 November 2025 13:09:41 +0000 (0:00:00.274) 0:00:00.274 ***** 2025-11-01 13:11:16.864354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-11-01 13:11:16.864368 | orchestrator | 2025-11-01 13:11:16.864380 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-11-01 13:11:16.864391 | orchestrator | Saturday 01 November 2025 13:09:42 +0000 (0:00:00.237) 0:00:00.511 ***** 2025-11-01 13:11:16.864402 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-11-01 13:11:16.864413 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/data) 2025-11-01 13:11:16.864451 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-11-01 13:11:16.864463 | orchestrator | 2025-11-01 13:11:16.864474 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-11-01 13:11:16.864485 | orchestrator | Saturday 01 November 2025 13:09:43 +0000 (0:00:01.356) 0:00:01.867 ***** 2025-11-01 13:11:16.864510 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-11-01 13:11:16.864522 | orchestrator | 2025-11-01 13:11:16.864532 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-11-01 13:11:16.864543 | orchestrator | Saturday 01 November 2025 13:09:45 +0000 (0:00:01.545) 0:00:03.413 ***** 2025-11-01 13:11:16.864554 | orchestrator | changed: [testbed-manager] 2025-11-01 13:11:16.864749 | orchestrator | 2025-11-01 13:11:16.864764 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-11-01 13:11:16.864775 | orchestrator | Saturday 01 November 2025 13:09:46 +0000 (0:00:00.955) 0:00:04.368 ***** 2025-11-01 13:11:16.864786 | orchestrator | changed: [testbed-manager] 2025-11-01 13:11:16.864797 | orchestrator | 2025-11-01 13:11:16.864807 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-11-01 13:11:16.864818 | orchestrator | Saturday 01 November 2025 13:09:46 +0000 (0:00:00.964) 0:00:05.333 ***** 2025-11-01 13:11:16.864829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-11-01 13:11:16.864840 | orchestrator | ok: [testbed-manager]
2025-11-01 13:11:16.864851 | orchestrator |
2025-11-01 13:11:16.864861 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-11-01 13:11:16.864872 | orchestrator | Saturday 01 November 2025 13:10:25 +0000 (0:00:38.816) 0:00:44.149 *****
2025-11-01 13:11:16.864883 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-11-01 13:11:16.864894 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-11-01 13:11:16.864905 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-11-01 13:11:16.864916 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-11-01 13:11:16.864926 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-11-01 13:11:16.864937 | orchestrator |
2025-11-01 13:11:16.864947 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-11-01 13:11:16.864958 | orchestrator | Saturday 01 November 2025 13:10:29 +0000 (0:00:04.154) 0:00:48.304 *****
2025-11-01 13:11:16.864969 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-11-01 13:11:16.864980 | orchestrator |
2025-11-01 13:11:16.864990 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-11-01 13:11:16.865002 | orchestrator | Saturday 01 November 2025 13:10:30 +0000 (0:00:00.513) 0:00:48.817 *****
2025-11-01 13:11:16.865012 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:11:16.865077 | orchestrator |
2025-11-01 13:11:16.865150 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-11-01 13:11:16.865161 | orchestrator | Saturday 01 November 2025 13:10:30 +0000 (0:00:00.148) 0:00:48.966 *****
2025-11-01 13:11:16.865766 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:11:16.865785 | orchestrator |
2025-11-01 13:11:16.865795 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-11-01 13:11:16.865806 | orchestrator | Saturday 01 November 2025 13:10:31 +0000 (0:00:00.581) 0:00:49.548 *****
2025-11-01 13:11:16.865817 | orchestrator | changed: [testbed-manager]
2025-11-01 13:11:16.865827 | orchestrator |
2025-11-01 13:11:16.865838 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-11-01 13:11:16.865849 | orchestrator | Saturday 01 November 2025 13:10:32 +0000 (0:00:01.518) 0:00:51.066 *****
2025-11-01 13:11:16.865859 | orchestrator | changed: [testbed-manager]
2025-11-01 13:11:16.865870 | orchestrator |
2025-11-01 13:11:16.865881 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-11-01 13:11:16.865891 | orchestrator | Saturday 01 November 2025 13:10:33 +0000 (0:00:00.844) 0:00:51.911 *****
2025-11-01 13:11:16.865915 | orchestrator | changed: [testbed-manager]
2025-11-01 13:11:16.865926 | orchestrator |
2025-11-01 13:11:16.865937 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-11-01 13:11:16.865947 | orchestrator | Saturday 01 November 2025 13:10:34 +0000 (0:00:00.648) 0:00:52.559 *****
2025-11-01 13:11:16.865958 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-11-01 13:11:16.865969 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-11-01 13:11:16.865980 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-11-01 13:11:16.865990 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-11-01 13:11:16.866001 | orchestrator |
2025-11-01 13:11:16.866012 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:11:16.866076 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:11:16.866088 | orchestrator |
2025-11-01 13:11:16.866099 | orchestrator |
2025-11-01 13:11:16.866155 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:11:16.866168 | orchestrator | Saturday 01 November 2025 13:10:35 +0000 (0:00:01.517) 0:00:54.076 *****
2025-11-01 13:11:16.866180 | orchestrator | ===============================================================================
2025-11-01 13:11:16.866191 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.82s
2025-11-01 13:11:16.866201 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.15s
2025-11-01 13:11:16.866212 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.55s
2025-11-01 13:11:16.866223 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.52s
2025-11-01 13:11:16.866233 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.52s
2025-11-01 13:11:16.866244 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s
2025-11-01 13:11:16.866287 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2025-11-01 13:11:16.866308 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2025-11-01 13:11:16.866319 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s
2025-11-01 13:11:16.866330 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s
2025-11-01 13:11:16.866349 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s
2025-11-01 13:11:16.866360 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2025-11-01 13:11:16.866371 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2025-11-01 13:11:16.866382 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2025-11-01 13:11:16.866392 | orchestrator |
2025-11-01 13:11:16.866403 | orchestrator |
2025-11-01 13:11:16.866414 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:11:16.866424 | orchestrator |
2025-11-01 13:11:16.866435 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:11:16.866445 | orchestrator | Saturday 01 November 2025 13:10:41 +0000 (0:00:00.212) 0:00:00.212 *****
2025-11-01 13:11:16.866456 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.866467 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.866478 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.866488 | orchestrator |
2025-11-01 13:11:16.866499 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:11:16.866510 | orchestrator | Saturday 01 November 2025 13:10:41 +0000 (0:00:00.380) 0:00:00.593 *****
2025-11-01 13:11:16.866520 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-11-01 13:11:16.866531 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-11-01 13:11:16.866542 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-11-01 13:11:16.866553 | orchestrator |
2025-11-01 13:11:16.866573 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-11-01 13:11:16.866584 | orchestrator |
2025-11-01 13:11:16.866594 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-11-01 13:11:16.866605 | orchestrator | Saturday 01 November 2025 13:10:42 +0000 (0:00:00.921) 0:00:01.514 *****
2025-11-01 13:11:16.866616 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.866626 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.866637 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.866648 | orchestrator |
2025-11-01 13:11:16.866658 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:11:16.866670 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:11:16.866681 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:11:16.866692 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:11:16.866703 | orchestrator |
2025-11-01 13:11:16.866714 | orchestrator |
2025-11-01 13:11:16.866724 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:11:16.866735 | orchestrator | Saturday 01 November 2025 13:10:43 +0000 (0:00:00.711) 0:00:02.226 *****
2025-11-01 13:11:16.866746 | orchestrator | ===============================================================================
2025-11-01 13:11:16.866756 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s
2025-11-01 13:11:16.866767 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.71s
2025-11-01 13:11:16.866778 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2025-11-01 13:11:16.866788 | orchestrator |
2025-11-01 13:11:16.866799 | orchestrator |
2025-11-01 13:11:16.866810 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:11:16.866820 | orchestrator |
2025-11-01 13:11:16.866831 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:11:16.866842 | orchestrator | Saturday 01 November 2025 13:08:12 +0000 (0:00:00.315) 0:00:00.315 *****
2025-11-01 13:11:16.866853 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.866863 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.866874 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.866885 | orchestrator |
2025-11-01 13:11:16.866895 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:11:16.866906 | orchestrator | Saturday 01 November 2025 13:08:12 +0000 (0:00:00.415) 0:00:00.731 *****
2025-11-01 13:11:16.866917 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-11-01 13:11:16.866927 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-11-01 13:11:16.866938 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-11-01 13:11:16.866949 | orchestrator |
2025-11-01 13:11:16.866960 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-11-01 13:11:16.866971 | orchestrator |
2025-11-01 13:11:16.867017 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-01 13:11:16.867030 | orchestrator | Saturday 01 November 2025 13:08:13 +0000 (0:00:00.485) 0:00:01.217 *****
2025-11-01 13:11:16.867041 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:11:16.867052 | orchestrator |
2025-11-01 13:11:16.867062 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-11-01 13:11:16.867073 | orchestrator | Saturday 01 November 2025 13:08:13 +0000 (0:00:00.595) 0:00:01.813 *****
2025-11-01 13:11:16.867095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.867120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.867133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.867147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-11-01 13:11:16.867216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867274 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.867287 | orchestrator |
2025-11-01 13:11:16.867298 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-11-01 13:11:16.867309 | orchestrator | Saturday 01 November 2025 13:08:15 +0000 (0:00:01.959) 0:00:03.772 *****
2025-11-01 13:11:16.867320 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-11-01 13:11:16.867331 | orchestrator |
2025-11-01 13:11:16.867342 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-11-01 13:11:16.867353 | orchestrator | Saturday 01 November 2025 13:08:16 +0000 (0:00:01.004) 0:00:04.777 *****
2025-11-01 13:11:16.867364 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.867374 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.867385 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.867396 | orchestrator |
2025-11-01 13:11:16.867406 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-11-01 13:11:16.867417 | orchestrator | Saturday 01 November 2025 13:08:17 +0000 (0:00:00.533) 0:00:05.310 *****
2025-11-01 13:11:16.867428 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-01 13:11:16.867438 | orchestrator |
2025-11-01 13:11:16.867449 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-01 13:11:16.867460 | orchestrator | Saturday 01 November 2025 13:08:18 +0000 (0:00:00.784) 0:00:06.094 *****
2025-11-01 13:11:16.867471 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:11:16.867488 | orchestrator |
2025-11-01 13:11:16.867504 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-11-01 13:11:16.867516 | orchestrator | Saturday 01 November 2025 13:08:18 +0000 (0:00:00.625) 0:00:06.720 *****
2025-11-01 13:11:16.867533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-01 13:11:16.867546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.867558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.867571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.867627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.867639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.867650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.867661 | orchestrator |
2025-11-01 13:11:16.867672 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-11-01 13:11:16.867683 | orchestrator | Saturday 01 November 2025 13:08:22 +0000 (0:00:03.515) 0:00:10.235 *****
2025-11-01 13:11:16.867695 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.867727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.867744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:11:16.867756 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.867768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.867780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.867792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:11:16.867803 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.867828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.867840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-01 13:11:16.867870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.867882 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:11:16.867893 | orchestrator |
2025-11-01 13:11:16.867903 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-11-01 13:11:16.867914 | orchestrator | Saturday 01 November 2025 13:08:23 +0000 (0:00:00.927) 0:00:11.163 *****
2025-11-01 13:11:16.867926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.867938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.867955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:11:16.867967 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.867987 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.868004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:11:16.868028 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.868039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-11-01 13:11:16.868057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-11-01 13:11:16.868087 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:11:16.868098 | orchestrator | 2025-11-01 13:11:16.868109 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-11-01 13:11:16.868120 | orchestrator | Saturday 01 November 2025 13:08:24 +0000 (0:00:00.804) 0:00:11.968 ***** 2025-11-01 13:11:16.868136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868306 | orchestrator | 2025-11-01 13:11:16.868317 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-11-01 13:11:16.868328 | orchestrator | Saturday 01 November 2025 13:08:27 +0000 (0:00:03.329) 0:00:15.297 ***** 2025-11-01 13:11:16.868347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868479 | orchestrator | 2025-11-01 13:11:16.868490 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-11-01 13:11:16.868501 | orchestrator | Saturday 01 November 2025 13:08:33 +0000 (0:00:06.090) 0:00:21.388 ***** 2025-11-01 13:11:16.868513 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:11:16.868530 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:11:16.868541 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:11:16.868551 | orchestrator | 2025-11-01 13:11:16.868562 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-11-01 13:11:16.868573 | orchestrator | Saturday 01 November 2025 13:08:35 +0000 (0:00:01.769) 0:00:23.158 ***** 2025-11-01 13:11:16.868584 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.868595 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.868605 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:11:16.868616 | orchestrator | 2025-11-01 13:11:16.868627 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-11-01 13:11:16.868637 | orchestrator | Saturday 01 November 2025 13:08:36 +0000 (0:00:00.717) 0:00:23.875 ***** 2025-11-01 13:11:16.868648 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.868659 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.868669 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:11:16.868680 | orchestrator | 2025-11-01 13:11:16.868691 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-11-01 13:11:16.868702 | orchestrator | Saturday 01 November 2025 13:08:36 +0000 (0:00:00.344) 0:00:24.220 ***** 2025-11-01 13:11:16.868712 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.868723 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.868734 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:11:16.868744 | orchestrator | 2025-11-01 13:11:16.868755 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-11-01 13:11:16.868766 | orchestrator | Saturday 01 November 2025 13:08:36 +0000 (0:00:00.594) 0:00:24.814 ***** 2025-11-01 13:11:16.868778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-11-01 13:11:16.868862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-11-01 13:11:16.868881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-11-01 13:11:16.868927 | orchestrator | 2025-11-01 13:11:16.868938 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-11-01 13:11:16.868949 | orchestrator | Saturday 01 November 2025 13:08:39 +0000 (0:00:02.618) 0:00:27.432 ***** 2025-11-01 13:11:16.868960 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:11:16.868971 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:11:16.868981 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:11:16.868992 | orchestrator | 2025-11-01 13:11:16.869003 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-11-01 13:11:16.869014 | orchestrator | Saturday 01 November 2025 13:08:39 +0000 (0:00:00.321) 0:00:27.754 ***** 2025-11-01 13:11:16.869025 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 13:11:16.869036 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 13:11:16.869046 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-11-01 13:11:16.869057 | orchestrator | 2025-11-01 13:11:16.869068 | 
orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-11-01 13:11:16.869079 | orchestrator | Saturday 01 November 2025 13:08:41 +0000 (0:00:01.992) 0:00:29.747 *****
2025-11-01 13:11:16.869090 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-01 13:11:16.869100 | orchestrator |
2025-11-01 13:11:16.869111 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-11-01 13:11:16.869122 | orchestrator | Saturday 01 November 2025 13:08:42 +0000 (0:00:01.046) 0:00:30.793 *****
2025-11-01 13:11:16.869133 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.869143 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:11:16.869154 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:11:16.869165 | orchestrator |
2025-11-01 13:11:16.869175 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-11-01 13:11:16.869186 | orchestrator | Saturday 01 November 2025 13:08:43 +0000 (0:00:01.002) 0:00:31.795 *****
2025-11-01 13:11:16.869197 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-11-01 13:11:16.869208 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-01 13:11:16.869218 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-11-01 13:11:16.869229 | orchestrator |
2025-11-01 13:11:16.869240 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-11-01 13:11:16.869274 | orchestrator | Saturday 01 November 2025 13:08:45 +0000 (0:00:01.144) 0:00:32.940 *****
2025-11-01 13:11:16.869293 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.869311 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.869331 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.869349 | orchestrator |
2025-11-01 13:11:16.869365 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-11-01 13:11:16.869376 | orchestrator | Saturday 01 November 2025 13:08:45 +0000 (0:00:00.347) 0:00:33.288 *****
2025-11-01 13:11:16.869387 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-11-01 13:11:16.869397 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-11-01 13:11:16.869408 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-11-01 13:11:16.869419 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-11-01 13:11:16.869430 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-11-01 13:11:16.869455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-11-01 13:11:16.869466 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-11-01 13:11:16.869477 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-11-01 13:11:16.869488 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-11-01 13:11:16.869499 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-11-01 13:11:16.869509 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-11-01 13:11:16.869520 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-11-01 13:11:16.869530 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-11-01 13:11:16.869541 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-11-01 13:11:16.869552 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-11-01 13:11:16.869568 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:11:16.869579 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:11:16.869590 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:11:16.869600 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:11:16.869611 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:11:16.869622 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:11:16.869632 | orchestrator |
2025-11-01 13:11:16.869643 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-11-01 13:11:16.869654 | orchestrator | Saturday 01 November 2025 13:08:55 +0000 (0:00:09.800) 0:00:43.088 *****
2025-11-01 13:11:16.869664 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:11:16.869675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:11:16.869686 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:11:16.869696 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:11:16.869707 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:11:16.869717 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:11:16.869728 | orchestrator |
2025-11-01 13:11:16.869738 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-11-01 13:11:16.869749 | orchestrator | Saturday 01 November 2025 13:08:58 +0000 (0:00:02.930) 0:00:46.019 *****
2025-11-01 13:11:16.869761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-01 13:11:16.869789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-01 13:11:16.869808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-11-01 13:11:16.869820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-01 13:11:16.869832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-01 13:11:16.869843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-11-01 13:11:16.869861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.869879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.869891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-11-01 13:11:16.869902 | orchestrator |
2025-11-01 13:11:16.869918 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-01 13:11:16.869929 | orchestrator | Saturday 01 November 2025 13:09:00 +0000 (0:00:02.457) 0:00:48.476 *****
2025-11-01 13:11:16.869940 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.869951 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:11:16.869961 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:11:16.869972 | orchestrator |
2025-11-01 13:11:16.869983 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-11-01 13:11:16.869993 | orchestrator | Saturday 01 November
2025 13:09:00 +0000 (0:00:00.315) 0:00:48.792 *****
2025-11-01 13:11:16.870004 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870015 | orchestrator |
2025-11-01 13:11:16.870058 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-11-01 13:11:16.870069 | orchestrator | Saturday 01 November 2025 13:09:03 +0000 (0:00:02.498) 0:00:51.291 *****
2025-11-01 13:11:16.870079 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870090 | orchestrator |
2025-11-01 13:11:16.870101 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-11-01 13:11:16.870112 | orchestrator | Saturday 01 November 2025 13:09:06 +0000 (0:00:02.593) 0:00:53.884 *****
2025-11-01 13:11:16.870123 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.870133 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.870144 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.870155 | orchestrator |
2025-11-01 13:11:16.870166 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-11-01 13:11:16.870176 | orchestrator | Saturday 01 November 2025 13:09:07 +0000 (0:00:01.106) 0:00:54.990 *****
2025-11-01 13:11:16.870187 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.870198 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.870209 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.870226 | orchestrator |
2025-11-01 13:11:16.870237 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-11-01 13:11:16.870378 | orchestrator | Saturday 01 November 2025 13:09:07 +0000 (0:00:00.315) 0:00:55.306 *****
2025-11-01 13:11:16.870424 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.870435 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:11:16.870445 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:11:16.870454 | orchestrator |
2025-11-01 13:11:16.870464 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-11-01 13:11:16.870474 | orchestrator | Saturday 01 November 2025 13:09:07 +0000 (0:00:00.352) 0:00:55.659 *****
2025-11-01 13:11:16.870483 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870493 | orchestrator |
2025-11-01 13:11:16.870503 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-11-01 13:11:16.870512 | orchestrator | Saturday 01 November 2025 13:09:24 +0000 (0:00:16.538) 0:01:12.197 *****
2025-11-01 13:11:16.870522 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870531 | orchestrator |
2025-11-01 13:11:16.870541 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-11-01 13:11:16.870550 | orchestrator | Saturday 01 November 2025 13:09:35 +0000 (0:00:00.071) 0:01:23.521 *****
2025-11-01 13:11:16.870560 | orchestrator |
2025-11-01 13:11:16.870570 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-11-01 13:11:16.870579 | orchestrator | Saturday 01 November 2025 13:09:35 +0000 (0:00:00.068) 0:01:23.592 *****
2025-11-01 13:11:16.870588 | orchestrator |
2025-11-01 13:11:16.870598 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-11-01 13:11:16.870607 | orchestrator | Saturday 01 November 2025 13:09:35 +0000 (0:00:00.079) 0:01:23.661 *****
2025-11-01 13:11:16.870617 | orchestrator |
2025-11-01 13:11:16.870627 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-11-01 13:11:16.870636 | orchestrator | Saturday 01 November 2025 13:09:35 +0000 (0:00:00.079) 0:01:23.740 *****
2025-11-01 13:11:16.870645 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870655 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:11:16.870665 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:11:16.870674 | orchestrator |
2025-11-01 13:11:16.870684 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-11-01 13:11:16.870694 | orchestrator | Saturday 01 November 2025 13:09:53 +0000 (0:00:17.880) 0:01:41.621 *****
2025-11-01 13:11:16.870703 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870713 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:11:16.870722 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:11:16.870732 | orchestrator |
2025-11-01 13:11:16.870741 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-11-01 13:11:16.870751 | orchestrator | Saturday 01 November 2025 13:10:04 +0000 (0:00:10.329) 0:01:51.951 *****
2025-11-01 13:11:16.870761 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:11:16.870770 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870792 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:11:16.870802 | orchestrator |
2025-11-01 13:11:16.870811 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-01 13:11:16.870821 | orchestrator | Saturday 01 November 2025 13:10:16 +0000 (0:00:12.803) 0:02:04.755 *****
2025-11-01 13:11:16.870830 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:11:16.870840 | orchestrator |
2025-11-01 13:11:16.870849 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-11-01 13:11:16.870859 | orchestrator | Saturday 01 November 2025 13:10:17 +0000 (0:00:00.762) 0:02:05.517 *****
2025-11-01 13:11:16.870868 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:11:16.870878 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:11:16.870887 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.870895 | orchestrator |
2025-11-01 13:11:16.870903 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-11-01 13:11:16.870919 | orchestrator | Saturday 01 November 2025 13:10:18 +0000 (0:00:00.765) 0:02:06.283 *****
2025-11-01 13:11:16.870927 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:11:16.870935 | orchestrator |
2025-11-01 13:11:16.870942 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-11-01 13:11:16.870950 | orchestrator | Saturday 01 November 2025 13:10:20 +0000 (0:00:01.859) 0:02:08.143 *****
2025-11-01 13:11:16.870958 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-11-01 13:11:16.870966 | orchestrator |
2025-11-01 13:11:16.870979 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-11-01 13:11:16.870987 | orchestrator | Saturday 01 November 2025 13:10:33 +0000 (0:00:13.277) 0:02:21.420 *****
2025-11-01 13:11:16.870995 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-11-01 13:11:16.871003 | orchestrator |
2025-11-01 13:11:16.871011 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-11-01 13:11:16.871018 | orchestrator | Saturday 01 November 2025 13:11:00 +0000 (0:00:26.681) 0:02:48.102 *****
2025-11-01 13:11:16.871026 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-11-01 13:11:16.871034 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-11-01 13:11:16.871042 | orchestrator |
2025-11-01 13:11:16.871050 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-11-01 13:11:16.871058 | orchestrator | Saturday 01 November 2025 13:11:07 +0000 (0:00:07.330) 0:02:55.433 *****
2025-11-01 13:11:16.871066 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.871073 | orchestrator |
2025-11-01 13:11:16.871081 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-11-01 13:11:16.871089 | orchestrator | Saturday 01 November 2025 13:11:07 +0000 (0:00:00.194) 0:02:55.627 *****
2025-11-01 13:11:16.871097 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.871104 | orchestrator |
2025-11-01 13:11:16.871112 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-11-01 13:11:16.871120 | orchestrator | Saturday 01 November 2025 13:11:07 +0000 (0:00:00.113) 0:02:55.741 *****
2025-11-01 13:11:16.871128 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.871136 | orchestrator |
2025-11-01 13:11:16.871144 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-11-01 13:11:16.871151 | orchestrator | Saturday 01 November 2025 13:11:08 +0000 (0:00:00.144) 0:02:55.886 *****
2025-11-01 13:11:16.871159 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.871167 | orchestrator |
2025-11-01 13:11:16.871175 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-11-01 13:11:16.871182 | orchestrator | Saturday 01 November 2025 13:11:08 +0000 (0:00:00.687) 0:02:56.573 *****
2025-11-01 13:11:16.871190 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:11:16.871198 | orchestrator |
2025-11-01 13:11:16.871206 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-11-01 13:11:16.871213 | orchestrator | Saturday 01 November 2025 13:11:12 +0000 (0:00:03.714) 0:03:00.288 *****
2025-11-01 13:11:16.871221 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:11:16.871229 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:11:16.871236 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:11:16.871244 | orchestrator |
2025-11-01 13:11:16.871269 | orchestrator |
PLAY RECAP *********************************************************************
2025-11-01 13:11:16.871278 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-11-01 13:11:16.871286 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-11-01 13:11:16.871294 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-11-01 13:11:16.871307 | orchestrator |
2025-11-01 13:11:16.871315 | orchestrator |
2025-11-01 13:11:16.871323 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:11:16.871331 | orchestrator | Saturday 01 November 2025 13:11:14 +0000 (0:00:02.527) 0:03:02.815 *****
2025-11-01 13:11:16.871339 | orchestrator | ===============================================================================
2025-11-01 13:11:16.871346 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.68s
2025-11-01 13:11:16.871354 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 17.88s
2025-11-01 13:11:16.871362 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.54s
2025-11-01 13:11:16.871369 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.28s
2025-11-01 13:11:16.871377 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.80s
2025-11-01 13:11:16.871389 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.32s
2025-11-01 13:11:16.871397 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.33s
2025-11-01 13:11:16.871405 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.80s
2025-11-01 13:11:16.871413 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.33s
2025-11-01 13:11:16.871420 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.09s
2025-11-01 13:11:16.871428 | orchestrator | keystone : Creating default user role ----------------------------------- 3.71s
2025-11-01 13:11:16.871436 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.52s
2025-11-01 13:11:16.871444 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.33s
2025-11-01 13:11:16.871452 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.93s
2025-11-01 13:11:16.871459 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.62s
2025-11-01 13:11:16.871467 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.59s
2025-11-01 13:11:16.871475 | orchestrator | keystone : include_tasks ------------------------------------------------ 2.53s
2025-11-01 13:11:16.871486 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.50s
2025-11-01 13:11:16.871494 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s
2025-11-01 13:11:16.871502 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.99s
2025-11-01 13:11:16.871510 | orchestrator | 2025-11-01 13:11:16 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:11:19.909758 | orchestrator | 2025-11-01 13:11:19 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED
2025-11-01 13:11:19.912382 | orchestrator | 2025-11-01 13:11:19 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED
2025-11-01 13:11:19.913508 | orchestrator | 2025-11-01 13:11:19 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED
2025-11-01 13:11:19.914866 | orchestrator | 2025-11-01
13:11:19 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state STARTED
2025-11-01 13:11:19.915719 | orchestrator | 2025-11-01 13:11:19 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED
2025-11-01 13:11:19.915740 | orchestrator | 2025-11-01 13:11:19 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:11:35.150185 | orchestrator | 2025-11-01 13:11:35 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED
2025-11-01 13:11:35.151099 | orchestrator | 2025-11-01 13:11:35 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED
2025-11-01 13:11:35.153759 | orchestrator | 2025-11-01 13:11:35 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED
2025-11-01 13:11:35.154185 | orchestrator | 2025-11-01 13:11:35 | INFO  | Task aad2b43e-ca56-41fc-9a14-6e6be5233fea is in state SUCCESS
2025-11-01 13:11:35.155601 | orchestrator | 2025-11-01 13:11:35 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED
2025-11-01 13:11:35.155622 | orchestrator | 2025-11-01 13:11:35 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:11:38.186663 | orchestrator | 2025-11-01 13:11:38 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED
2025-11-01 13:11:38.187899 | orchestrator | 2025-11-01 13:11:38 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED
2025-11-01 13:11:38.189056 | orchestrator | 2025-11-01 13:11:38 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state STARTED
2025-11-01 13:11:38.190161 | orchestrator | 2025-11-01 13:11:38 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED
2025-11-01 13:11:38.191378 | orchestrator | 2025-11-01 13:11:38 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED
2025-11-01 13:11:38.191403 | orchestrator | 2025-11-01 13:11:38 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:12:11.758659 | orchestrator | 2025-11-01 13:12:11 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED
2025-11-01 13:12:11.759481 | orchestrator | 2025-11-01 13:12:11 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED
2025-11-01 13:12:11.760501 | orchestrator | 2025-11-01 13:12:11 | INFO  | Task ac26e71f-b6cb-4976-9427-bdaded4975e7 is in state SUCCESS
2025-11-01 13:12:11.761017 | orchestrator |
2025-11-01 13:12:11.761037 | orchestrator |
2025-11-01 13:12:11.761048 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:12:11.761058 | orchestrator |
2025-11-01 13:12:11.761068 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:12:11.761078 | orchestrator | Saturday 01 November 2025 13:10:49 +0000 (0:00:00.302) 0:00:00.302
***** 2025-11-01 13:12:11.761089 | orchestrator | ok: [testbed-manager] 2025-11-01 13:12:11.761100 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:12:11.761110 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:12:11.761119 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:12:11.761129 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:12:11.761138 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:12:11.761148 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:12:11.761157 | orchestrator | 2025-11-01 13:12:11.761167 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:12:11.761177 | orchestrator | Saturday 01 November 2025 13:10:50 +0000 (0:00:01.264) 0:00:01.567 ***** 2025-11-01 13:12:11.761187 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761197 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761207 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761216 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761226 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761235 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761245 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-11-01 13:12:11.761254 | orchestrator | 2025-11-01 13:12:11.761308 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-11-01 13:12:11.761319 | orchestrator | 2025-11-01 13:12:11.761328 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-11-01 13:12:11.761338 | orchestrator | Saturday 01 November 2025 13:10:52 +0000 (0:00:01.354) 0:00:02.921 ***** 2025-11-01 13:12:11.761348 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:12:11.761360 | orchestrator | 2025-11-01 13:12:11.761369 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-11-01 13:12:11.761402 | orchestrator | Saturday 01 November 2025 13:10:54 +0000 (0:00:02.157) 0:00:05.079 ***** 2025-11-01 13:12:11.761412 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-11-01 13:12:11.761422 | orchestrator | 2025-11-01 13:12:11.761431 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-11-01 13:12:11.761440 | orchestrator | Saturday 01 November 2025 13:10:59 +0000 (0:00:04.647) 0:00:09.727 ***** 2025-11-01 13:12:11.761451 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-11-01 13:12:11.761462 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-11-01 13:12:11.761472 | orchestrator | 2025-11-01 13:12:11.761481 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-11-01 13:12:11.761490 | orchestrator | Saturday 01 November 2025 13:11:07 +0000 (0:00:08.479) 0:00:18.206 ***** 2025-11-01 13:12:11.761500 | orchestrator | ok: [testbed-manager] => (item=service) 2025-11-01 13:12:11.761509 | orchestrator | 2025-11-01 13:12:11.761519 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-11-01 13:12:11.761528 | orchestrator | Saturday 01 November 2025 13:11:12 +0000 (0:00:04.455) 0:00:22.662 ***** 2025-11-01 13:12:11.761537 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:12:11.761547 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-11-01 13:12:11.761556 | 
orchestrator | 2025-11-01 13:12:11.761566 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-11-01 13:12:11.761575 | orchestrator | Saturday 01 November 2025 13:11:17 +0000 (0:00:05.670) 0:00:28.333 ***** 2025-11-01 13:12:11.761585 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-11-01 13:12:11.761595 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-11-01 13:12:11.761604 | orchestrator | 2025-11-01 13:12:11.761613 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-11-01 13:12:11.761623 | orchestrator | Saturday 01 November 2025 13:11:26 +0000 (0:00:08.429) 0:00:36.763 ***** 2025-11-01 13:12:11.761632 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-11-01 13:12:11.761642 | orchestrator | 2025-11-01 13:12:11.761653 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:12:11.761664 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761675 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761685 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761696 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761714 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761735 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761746 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.761757 | orchestrator | 2025-11-01 13:12:11.761767 | orchestrator | 
2025-11-01 13:12:11.761778 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:12:11.761789 | orchestrator | Saturday 01 November 2025 13:11:33 +0000 (0:00:07.656) 0:00:44.420 ***** 2025-11-01 13:12:11.761800 | orchestrator | =============================================================================== 2025-11-01 13:12:11.761830 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.48s 2025-11-01 13:12:11.761840 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 8.44s 2025-11-01 13:12:11.761852 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.65s 2025-11-01 13:12:11.761862 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.67s 2025-11-01 13:12:11.761872 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.65s 2025-11-01 13:12:11.761883 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.46s 2025-11-01 13:12:11.761894 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.16s 2025-11-01 13:12:11.761904 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2025-11-01 13:12:11.761914 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.26s 2025-11-01 13:12:11.761925 | orchestrator | 2025-11-01 13:12:11.761935 | orchestrator | 2025-11-01 13:12:11.762060 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-11-01 13:12:11.762076 | orchestrator | 2025-11-01 13:12:11.762086 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-11-01 13:12:11.762095 | orchestrator | Saturday 01 November 2025 13:10:41 +0000 (0:00:00.343) 0:00:00.343 ***** 2025-11-01 
13:12:11.762127 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762137 | orchestrator | 2025-11-01 13:12:11.762146 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-11-01 13:12:11.762156 | orchestrator | Saturday 01 November 2025 13:10:43 +0000 (0:00:02.280) 0:00:02.624 ***** 2025-11-01 13:12:11.762165 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762174 | orchestrator | 2025-11-01 13:12:11.762184 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-11-01 13:12:11.762193 | orchestrator | Saturday 01 November 2025 13:10:44 +0000 (0:00:01.125) 0:00:03.749 ***** 2025-11-01 13:12:11.762202 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762212 | orchestrator | 2025-11-01 13:12:11.762221 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-11-01 13:12:11.762231 | orchestrator | Saturday 01 November 2025 13:10:45 +0000 (0:00:01.121) 0:00:04.871 ***** 2025-11-01 13:12:11.762241 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762250 | orchestrator | 2025-11-01 13:12:11.762280 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-11-01 13:12:11.762290 | orchestrator | Saturday 01 November 2025 13:10:47 +0000 (0:00:01.707) 0:00:06.579 ***** 2025-11-01 13:12:11.762299 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762309 | orchestrator | 2025-11-01 13:12:11.762318 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-11-01 13:12:11.762328 | orchestrator | Saturday 01 November 2025 13:10:48 +0000 (0:00:01.075) 0:00:07.654 ***** 2025-11-01 13:12:11.762337 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762346 | orchestrator | 2025-11-01 13:12:11.762356 | orchestrator | TASK [Enable the ceph dashboard] 
*********************************************** 2025-11-01 13:12:11.762365 | orchestrator | Saturday 01 November 2025 13:10:49 +0000 (0:00:01.015) 0:00:08.670 ***** 2025-11-01 13:12:11.762375 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762384 | orchestrator | 2025-11-01 13:12:11.762394 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-11-01 13:12:11.762403 | orchestrator | Saturday 01 November 2025 13:10:51 +0000 (0:00:02.513) 0:00:11.183 ***** 2025-11-01 13:12:11.762412 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762422 | orchestrator | 2025-11-01 13:12:11.762431 | orchestrator | TASK [Create admin user] ******************************************************* 2025-11-01 13:12:11.762440 | orchestrator | Saturday 01 November 2025 13:10:53 +0000 (0:00:01.590) 0:00:12.774 ***** 2025-11-01 13:12:11.762450 | orchestrator | changed: [testbed-manager] 2025-11-01 13:12:11.762459 | orchestrator | 2025-11-01 13:12:11.762469 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-11-01 13:12:11.762487 | orchestrator | Saturday 01 November 2025 13:11:46 +0000 (0:00:52.762) 0:01:05.536 ***** 2025-11-01 13:12:11.762496 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:12:11.762506 | orchestrator | 2025-11-01 13:12:11.762515 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-01 13:12:11.762525 | orchestrator | 2025-11-01 13:12:11.762534 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 13:12:11.762543 | orchestrator | Saturday 01 November 2025 13:11:46 +0000 (0:00:00.197) 0:01:05.733 ***** 2025-11-01 13:12:11.762553 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:12:11.762562 | orchestrator | 2025-11-01 13:12:11.762571 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2025-11-01 13:12:11.762581 | orchestrator | 2025-11-01 13:12:11.762590 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 13:12:11.762600 | orchestrator | Saturday 01 November 2025 13:11:58 +0000 (0:00:11.780) 0:01:17.514 ***** 2025-11-01 13:12:11.762609 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:12:11.762618 | orchestrator | 2025-11-01 13:12:11.762628 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-11-01 13:12:11.762637 | orchestrator | 2025-11-01 13:12:11.762647 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-11-01 13:12:11.762658 | orchestrator | Saturday 01 November 2025 13:12:09 +0000 (0:00:11.361) 0:01:28.876 ***** 2025-11-01 13:12:11.762670 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:12:11.762680 | orchestrator | 2025-11-01 13:12:11.762699 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:12:11.762710 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-11-01 13:12:11.762721 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.762732 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.762742 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:12:11.762753 | orchestrator | 2025-11-01 13:12:11.762763 | orchestrator | 2025-11-01 13:12:11.762774 | orchestrator | 2025-11-01 13:12:11.762784 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:12:11.762795 | orchestrator | Saturday 01 November 2025 13:12:10 +0000 (0:00:01.344) 0:01:30.221 ***** 
2025-11-01 13:12:11.762805 | orchestrator | =============================================================================== 2025-11-01 13:12:11.762816 | orchestrator | Create admin user ------------------------------------------------------ 52.76s 2025-11-01 13:12:11.762826 | orchestrator | Restart ceph manager service ------------------------------------------- 24.49s 2025-11-01 13:12:11.762837 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.51s 2025-11-01 13:12:11.762847 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.28s 2025-11-01 13:12:11.762858 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.71s 2025-11-01 13:12:11.762869 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.59s 2025-11-01 13:12:11.762880 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.13s 2025-11-01 13:12:11.762890 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.12s 2025-11-01 13:12:11.762901 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.08s 2025-11-01 13:12:11.762911 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.02s 2025-11-01 13:12:11.762922 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2025-11-01 13:12:11.762933 | orchestrator | 2025-11-01 13:12:11 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:11.763389 | orchestrator | 2025-11-01 13:12:11 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:11.763466 | orchestrator | 2025-11-01 13:12:11 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:14.828317 | orchestrator | 2025-11-01 13:12:14 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state 
STARTED 2025-11-01 13:12:14.828390 | orchestrator | 2025-11-01 13:12:14 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:14.828401 | orchestrator | 2025-11-01 13:12:14 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:14.828412 | orchestrator | 2025-11-01 13:12:14 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:14.828421 | orchestrator | 2025-11-01 13:12:14 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:17.854609 | orchestrator | 2025-11-01 13:12:17 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:17.855833 | orchestrator | 2025-11-01 13:12:17 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:17.857666 | orchestrator | 2025-11-01 13:12:17 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:17.859044 | orchestrator | 2025-11-01 13:12:17 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:17.859068 | orchestrator | 2025-11-01 13:12:17 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:20.900301 | orchestrator | 2025-11-01 13:12:20 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:20.901304 | orchestrator | 2025-11-01 13:12:20 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:20.902413 | orchestrator | 2025-11-01 13:12:20 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:20.903461 | orchestrator | 2025-11-01 13:12:20 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:20.903672 | orchestrator | 2025-11-01 13:12:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:23.939402 | orchestrator | 2025-11-01 13:12:23 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 
13:12:23.941199 | orchestrator | 2025-11-01 13:12:23 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:23.942464 | orchestrator | 2025-11-01 13:12:23 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:23.944136 | orchestrator | 2025-11-01 13:12:23 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:23.944159 | orchestrator | 2025-11-01 13:12:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:26.985795 | orchestrator | 2025-11-01 13:12:26 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:26.987956 | orchestrator | 2025-11-01 13:12:26 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:26.988726 | orchestrator | 2025-11-01 13:12:26 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:26.989829 | orchestrator | 2025-11-01 13:12:26 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:26.990098 | orchestrator | 2025-11-01 13:12:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:30.025688 | orchestrator | 2025-11-01 13:12:30 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:30.026119 | orchestrator | 2025-11-01 13:12:30 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:30.028145 | orchestrator | 2025-11-01 13:12:30 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:30.029095 | orchestrator | 2025-11-01 13:12:30 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:30.029117 | orchestrator | 2025-11-01 13:12:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:33.061721 | orchestrator | 2025-11-01 13:12:33 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:33.062788 | orchestrator 
| 2025-11-01 13:12:33 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:33.063938 | orchestrator | 2025-11-01 13:12:33 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:33.065033 | orchestrator | 2025-11-01 13:12:33 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:33.065233 | orchestrator | 2025-11-01 13:12:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:36.102004 | orchestrator | 2025-11-01 13:12:36 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:36.103140 | orchestrator | 2025-11-01 13:12:36 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:36.105604 | orchestrator | 2025-11-01 13:12:36 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:36.107378 | orchestrator | 2025-11-01 13:12:36 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:36.108201 | orchestrator | 2025-11-01 13:12:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:39.141240 | orchestrator | 2025-11-01 13:12:39 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:39.142521 | orchestrator | 2025-11-01 13:12:39 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:39.145348 | orchestrator | 2025-11-01 13:12:39 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:39.146569 | orchestrator | 2025-11-01 13:12:39 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:39.146593 | orchestrator | 2025-11-01 13:12:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:42.185014 | orchestrator | 2025-11-01 13:12:42 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:42.186471 | orchestrator | 2025-11-01 13:12:42 | INFO  | 
Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:42.187336 | orchestrator | 2025-11-01 13:12:42 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:42.190445 | orchestrator | 2025-11-01 13:12:42 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:42.190471 | orchestrator | 2025-11-01 13:12:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:45.229823 | orchestrator | 2025-11-01 13:12:45 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:45.231045 | orchestrator | 2025-11-01 13:12:45 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:45.232981 | orchestrator | 2025-11-01 13:12:45 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:45.233981 | orchestrator | 2025-11-01 13:12:45 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:45.234158 | orchestrator | 2025-11-01 13:12:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:48.270542 | orchestrator | 2025-11-01 13:12:48 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:48.271353 | orchestrator | 2025-11-01 13:12:48 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:48.272313 | orchestrator | 2025-11-01 13:12:48 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:48.273338 | orchestrator | 2025-11-01 13:12:48 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:48.273363 | orchestrator | 2025-11-01 13:12:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:51.317248 | orchestrator | 2025-11-01 13:12:51 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:51.317366 | orchestrator | 2025-11-01 13:12:51 | INFO  | Task 
c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:51.317380 | orchestrator | 2025-11-01 13:12:51 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:51.319930 | orchestrator | 2025-11-01 13:12:51 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:51.319954 | orchestrator | 2025-11-01 13:12:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:54.357806 | orchestrator | 2025-11-01 13:12:54 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:54.357894 | orchestrator | 2025-11-01 13:12:54 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:54.357908 | orchestrator | 2025-11-01 13:12:54 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:54.358765 | orchestrator | 2025-11-01 13:12:54 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:54.358796 | orchestrator | 2025-11-01 13:12:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:12:57.394255 | orchestrator | 2025-11-01 13:12:57 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:12:57.395238 | orchestrator | 2025-11-01 13:12:57 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:12:57.396050 | orchestrator | 2025-11-01 13:12:57 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:12:57.397519 | orchestrator | 2025-11-01 13:12:57 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:12:57.397541 | orchestrator | 2025-11-01 13:12:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:13:00.432352 | orchestrator | 2025-11-01 13:13:00 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:13:00.433440 | orchestrator | 2025-11-01 13:13:00 | INFO  | Task 
c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:13:00.434609 | orchestrator | 2025-11-01 13:13:00 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:13:00.437612 | orchestrator | 2025-11-01 13:13:00 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:13:00.437636 | orchestrator | 2025-11-01 13:13:00 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles for tasks cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c, c85b9efe-09eb-4feb-a90d-0cd09e6e2a99, 6e094866-7a0c-4eef-a0d5-79c1e0897b13 and 2bc6b83c-2478-4fda-9fac-d1c83834a374 repeated every ~3 seconds from 13:13:03 to 13:14:38 ...]
2025-11-01 13:14:41.363638 | orchestrator | 2025-11-01 13:14:41 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:14:41.364927 | orchestrator | 2025-11-01 13:14:41 | INFO  | Task 
c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state STARTED 2025-11-01 13:14:41.366701 | orchestrator | 2025-11-01 13:14:41 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:14:41.368463 | orchestrator | 2025-11-01 13:14:41 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:14:41.368486 | orchestrator | 2025-11-01 13:14:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:14:44.409599 | orchestrator | 2025-11-01 13:14:44 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state STARTED 2025-11-01 13:14:44.417255 | orchestrator | 2025-11-01 13:14:44.417328 | orchestrator | 2025-11-01 13:14:44.417342 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:14:44.417354 | orchestrator | 2025-11-01 13:14:44.417366 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:14:44.417377 | orchestrator | Saturday 01 November 2025 13:10:49 +0000 (0:00:00.347) 0:00:00.347 ***** 2025-11-01 13:14:44.417389 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:14:44.417401 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:14:44.417413 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:14:44.417424 | orchestrator | 2025-11-01 13:14:44.417435 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:14:44.417446 | orchestrator | Saturday 01 November 2025 13:10:49 +0000 (0:00:00.364) 0:00:00.711 ***** 2025-11-01 13:14:44.417457 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-11-01 13:14:44.417468 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-11-01 13:14:44.417504 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-11-01 13:14:44.417515 | orchestrator | 2025-11-01 13:14:44.417526 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2025-11-01 13:14:44.417537 | orchestrator | 2025-11-01 13:14:44.417548 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 13:14:44.417559 | orchestrator | Saturday 01 November 2025 13:10:50 +0000 (0:00:00.649) 0:00:01.361 ***** 2025-11-01 13:14:44.417569 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:14:44.417581 | orchestrator | 2025-11-01 13:14:44.417592 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-11-01 13:14:44.417603 | orchestrator | Saturday 01 November 2025 13:10:51 +0000 (0:00:01.004) 0:00:02.366 ***** 2025-11-01 13:14:44.417614 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-11-01 13:14:44.417625 | orchestrator | 2025-11-01 13:14:44.417636 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-11-01 13:14:44.417647 | orchestrator | Saturday 01 November 2025 13:10:58 +0000 (0:00:06.776) 0:00:09.142 ***** 2025-11-01 13:14:44.417658 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-11-01 13:14:44.417669 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-11-01 13:14:44.417680 | orchestrator | 2025-11-01 13:14:44.417691 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-11-01 13:14:44.417702 | orchestrator | Saturday 01 November 2025 13:11:06 +0000 (0:00:07.906) 0:00:17.049 ***** 2025-11-01 13:14:44.417713 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-11-01 13:14:44.417724 | orchestrator | 2025-11-01 13:14:44.417734 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-11-01 13:14:44.417745 
| orchestrator | Saturday 01 November 2025 13:11:09 +0000 (0:00:03.742) 0:00:20.792 ***** 2025-11-01 13:14:44.417757 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:14:44.417783 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-11-01 13:14:44.417794 | orchestrator | 2025-11-01 13:14:44.417805 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-11-01 13:14:44.417816 | orchestrator | Saturday 01 November 2025 13:11:14 +0000 (0:00:04.717) 0:00:25.510 ***** 2025-11-01 13:14:44.417872 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:14:44.417885 | orchestrator | 2025-11-01 13:14:44.417897 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-11-01 13:14:44.417946 | orchestrator | Saturday 01 November 2025 13:11:19 +0000 (0:00:04.659) 0:00:30.169 ***** 2025-11-01 13:14:44.417959 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-11-01 13:14:44.417971 | orchestrator | 2025-11-01 13:14:44.417983 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-11-01 13:14:44.417994 | orchestrator | Saturday 01 November 2025 13:11:25 +0000 (0:00:05.864) 0:00:36.034 ***** 2025-11-01 13:14:44.418081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418156 | orchestrator | 2025-11-01 13:14:44.418167 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 13:14:44.418178 | orchestrator | Saturday 01 November 2025 13:11:36 +0000 (0:00:11.650) 0:00:47.684 ***** 2025-11-01 13:14:44.418189 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:14:44.418200 | orchestrator | 2025-11-01 13:14:44.418218 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-11-01 13:14:44.418229 | orchestrator | Saturday 01 November 2025 13:11:37 +0000 (0:00:01.003) 0:00:48.688 ***** 2025-11-01 13:14:44.418240 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:44.418251 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:44.418262 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:44.418273 | orchestrator | 2025-11-01 13:14:44.418308 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-11-01 13:14:44.418319 | orchestrator | Saturday 01 November 2025 13:11:43 +0000 (0:00:05.726) 0:00:54.414 ***** 2025-11-01 13:14:44.418330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418341 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418352 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418363 | orchestrator | 2025-11-01 13:14:44.418374 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-11-01 13:14:44.418384 | orchestrator | Saturday 01 November 2025 13:11:45 +0000 (0:00:01.778) 0:00:56.193 ***** 2025-11-01 13:14:44.418395 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418406 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418416 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:14:44.418427 | orchestrator | 2025-11-01 13:14:44.418438 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-11-01 13:14:44.418448 | orchestrator | Saturday 01 November 2025 13:11:46 +0000 (0:00:01.302) 0:00:57.496 ***** 2025-11-01 13:14:44.418459 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:14:44.418470 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:14:44.418480 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:14:44.418491 | orchestrator | 2025-11-01 13:14:44.418501 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-11-01 13:14:44.418512 | orchestrator | Saturday 01 November 2025 13:11:47 +0000 (0:00:00.715) 0:00:58.212 ***** 2025-11-01 13:14:44.418523 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.418534 | orchestrator | 2025-11-01 13:14:44.418544 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-11-01 
13:14:44.418555 | orchestrator | Saturday 01 November 2025 13:11:47 +0000 (0:00:00.372) 0:00:58.584 ***** 2025-11-01 13:14:44.418565 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.418576 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.418587 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.418597 | orchestrator | 2025-11-01 13:14:44.418608 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-11-01 13:14:44.418623 | orchestrator | Saturday 01 November 2025 13:11:48 +0000 (0:00:00.359) 0:00:58.943 ***** 2025-11-01 13:14:44.418634 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:14:44.418645 | orchestrator | 2025-11-01 13:14:44.418656 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-11-01 13:14:44.418673 | orchestrator | Saturday 01 November 2025 13:11:48 +0000 (0:00:00.603) 0:00:59.546 ***** 2025-11-01 13:14:44.418690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.418741 | orchestrator | 2025-11-01 13:14:44.418752 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-11-01 13:14:44.418763 | orchestrator | Saturday 01 November 2025 13:11:53 +0000 (0:00:05.229) 0:01:04.776 ***** 2025-11-01 13:14:44.418783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418796 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.418813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418832 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.418851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418863 | orchestrator | skipping: 
[testbed-node-1] 2025-11-01 13:14:44.418874 | orchestrator | 2025-11-01 13:14:44.418885 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-11-01 13:14:44.418896 | orchestrator | Saturday 01 November 2025 13:12:00 +0000 (0:00:06.411) 0:01:11.188 ***** 2025-11-01 13:14:44.418913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418932 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.418950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418962 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.418974 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-11-01 13:14:44.418999 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419010 | orchestrator | 2025-11-01 13:14:44.419021 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-11-01 13:14:44.419036 | orchestrator | Saturday 01 November 2025 
13:12:04 +0000 (0:00:04.486) 0:01:15.675 ***** 2025-11-01 13:14:44.419047 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419058 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419068 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419079 | orchestrator | 2025-11-01 13:14:44.419089 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-11-01 13:14:44.419100 | orchestrator | Saturday 01 November 2025 13:12:08 +0000 (0:00:04.010) 0:01:19.685 ***** 2025-11-01 13:14:44.419116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.419129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.419153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.419165 | orchestrator | 2025-11-01 13:14:44.419176 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-11-01 13:14:44.419187 | orchestrator | 
Saturday 01 November 2025 13:12:14 +0000 (0:00:05.961) 0:01:25.646 ***** 2025-11-01 13:14:44.419197 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:44.419208 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:44.419219 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:44.419229 | orchestrator | 2025-11-01 13:14:44.419240 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-11-01 13:14:44.419251 | orchestrator | Saturday 01 November 2025 13:12:24 +0000 (0:00:09.916) 0:01:35.563 ***** 2025-11-01 13:14:44.419262 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419272 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419299 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419310 | orchestrator | 2025-11-01 13:14:44.419321 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-11-01 13:14:44.419339 | orchestrator | Saturday 01 November 2025 13:12:30 +0000 (0:00:06.253) 0:01:41.817 ***** 2025-11-01 13:14:44 | INFO  | Task c85b9efe-09eb-4feb-a90d-0cd09e6e2a99 is in state SUCCESS 2025-11-01 13:14:44.419362 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419372 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419383 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419394 | orchestrator | 2025-11-01 13:14:44.419404 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-11-01 13:14:44.419415 | orchestrator | Saturday 01 November 2025 13:12:37 +0000 (0:00:06.183) 0:01:48.001 ***** 2025-11-01 13:14:44.419426 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419437 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419447 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419464 | orchestrator | 2025-11-01 13:14:44.419475 |
orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-11-01 13:14:44.419486 | orchestrator | Saturday 01 November 2025 13:12:44 +0000 (0:00:07.261) 0:01:55.263 ***** 2025-11-01 13:14:44.419496 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419507 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419517 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419528 | orchestrator | 2025-11-01 13:14:44.419538 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-11-01 13:14:44.419549 | orchestrator | Saturday 01 November 2025 13:12:51 +0000 (0:00:07.327) 0:02:02.590 ***** 2025-11-01 13:14:44.419560 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419571 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419581 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419592 | orchestrator | 2025-11-01 13:14:44.419602 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-11-01 13:14:44.419613 | orchestrator | Saturday 01 November 2025 13:12:52 +0000 (0:00:00.498) 0:02:03.088 ***** 2025-11-01 13:14:44.419623 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 13:14:44.419634 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:44.419645 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 13:14:44.419656 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:44.419666 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-11-01 13:14:44.419677 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:44.419687 | orchestrator | 2025-11-01 13:14:44.419698 | orchestrator | TASK [glance : Check glance containers] 
**************************************** 2025-11-01 13:14:44.419708 | orchestrator | Saturday 01 November 2025 13:12:57 +0000 (0:00:05.521) 0:02:08.610 ***** 2025-11-01 13:14:44.419725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.419746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-11-01 13:14:44.419770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-11-01 13:14:44.419783 | orchestrator |
2025-11-01 13:14:44.419793 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-11-01 13:14:44.419804 | orchestrator | Saturday 01 November 2025 13:13:10 +0000 (0:00:12.578) 0:02:21.188 *****
2025-11-01 13:14:44.419815 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:44.419826 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:44.419837 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:44.419847 | orchestrator |
2025-11-01 13:14:44.419858 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-11-01 13:14:44.419869 | orchestrator | Saturday 01 November 2025 13:13:10 +0000 (0:00:00.686) 0:02:21.875 *****
2025-11-01 13:14:44.419880 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.419890 | orchestrator |
2025-11-01 13:14:44.419901 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-11-01 13:14:44.419912 | orchestrator | Saturday 01 November 2025 13:13:13 +0000 (0:00:02.763) 0:02:24.638 *****
2025-11-01 13:14:44.419929 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.419940 | orchestrator |
2025-11-01 13:14:44.419950 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-11-01 13:14:44.419961 | orchestrator | Saturday 01 November 2025 13:13:16 +0000 (0:00:03.040) 0:02:27.678 *****
2025-11-01 13:14:44.419972 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.419982 | orchestrator |
2025-11-01 13:14:44.419993 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-11-01 13:14:44.420004 | orchestrator | Saturday 01 November 2025 13:13:19 +0000 (0:00:02.936) 0:02:30.615 *****
2025-11-01 13:14:44.420015 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.420025 | orchestrator |
2025-11-01 13:14:44.420036 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-11-01 13:14:44.420053 | orchestrator | Saturday 01 November 2025 13:13:55 +0000 (0:00:36.209) 0:03:06.825 *****
2025-11-01 13:14:44.420064 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.420075 | orchestrator |
2025-11-01 13:14:44.420085 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-11-01 13:14:44.420096 | orchestrator | Saturday 01 November 2025 13:13:58 +0000 (0:00:02.946) 0:03:09.771 *****
2025-11-01 13:14:44.420107 | orchestrator |
2025-11-01 13:14:44.420117 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-11-01 13:14:44.420128 | orchestrator | Saturday 01 November 2025 13:13:59 +0000 (0:00:00.361) 0:03:10.133 *****
2025-11-01 13:14:44.420139 | orchestrator |
2025-11-01 13:14:44.420149 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-11-01 13:14:44.420160 | orchestrator | Saturday 01 November 2025 13:13:59 +0000 (0:00:00.547) 0:03:10.681 *****
2025-11-01 13:14:44.420170 | orchestrator |
2025-11-01 13:14:44.420181 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-11-01 13:14:44.420191 | orchestrator | Saturday 01 November 2025 13:14:00 +0000 (0:00:00.570) 0:03:11.252 *****
2025-11-01 13:14:44.420202 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:44.420212 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:14:44.420223 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:14:44.420234 | orchestrator |
2025-11-01 13:14:44.420244 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:14:44.420256 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-11-01 13:14:44.420267 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-01 13:14:44.420296 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-11-01 13:14:44.420308 | orchestrator |
2025-11-01 13:14:44.420318 | orchestrator |
2025-11-01 13:14:44.420329 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:14:44.420339 | orchestrator | Saturday 01 November 2025 13:14:41 +0000 (0:00:41.580) 0:03:52.832 *****
2025-11-01 13:14:44.420350 | orchestrator | ===============================================================================
2025-11-01 13:14:44.420361 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.58s
2025-11-01 13:14:44.420371 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 36.21s
2025-11-01 13:14:44.420382 | orchestrator | glance : Check glance containers --------------------------------------- 12.58s
2025-11-01 13:14:44.420392 | orchestrator | glance : Ensuring config directories exist ----------------------------- 11.65s
2025-11-01 13:14:44.420403 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.92s
2025-11-01 13:14:44.420418 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.91s
2025-11-01 13:14:44.420429 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.33s
2025-11-01 13:14:44.420446 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.26s
2025-11-01 13:14:44.420457 | orchestrator | service-ks-register : glance | Creating services ------------------------ 6.78s
2025-11-01 13:14:44.420467 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.41s
2025-11-01 13:14:44.420478 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.25s
2025-11-01 13:14:44.420488 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.18s
2025-11-01 13:14:44.420499 | orchestrator | glance : Copying over config.json files for services -------------------- 5.96s
2025-11-01 13:14:44.420510 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.86s
2025-11-01 13:14:44.420520 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.73s
2025-11-01 13:14:44.420531 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.52s
2025-11-01 13:14:44.420541 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.23s
2025-11-01 13:14:44.420552 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.72s
2025-11-01 13:14:44.420562 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.66s
2025-11-01 13:14:44.420573 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.49s
2025-11-01 13:14:44.424471 | orchestrator | 2025-11-01 13:14:44 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED
2025-11-01 13:14:44.424561 | orchestrator | 2025-11-01 13:14:44 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED
2025-11-01 13:14:44.424576 | orchestrator | 2025-11-01 13:14:44 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED
2025-11-01 13:14:44.424588 | orchestrator | 2025-11-01 13:14:44 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:14:47.496242 | orchestrator | 2025-11-01 13:14:47 | INFO  | Task cbe0dd45-f5ab-45ad-94cf-e9b12ed55a1c is in state SUCCESS
2025-11-01 13:14:47.497563 | orchestrator |
2025-11-01 13:14:47.497588 | orchestrator |
2025-11-01 13:14:47.497597 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:14:47.497605 | orchestrator |
2025-11-01 13:14:47.497613 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:14:47.497621 | orchestrator | Saturday 01 November 2025 13:10:41 +0000 (0:00:00.323) 0:00:00.323 *****
2025-11-01 13:14:47.497629 | orchestrator | ok: [testbed-manager]
2025-11-01 13:14:47.497637 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:14:47.497645 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:14:47.497652 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:14:47.497659 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:14:47.497666 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:14:47.497673 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:14:47.497680 | orchestrator |
2025-11-01 13:14:47.497687 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:14:47.497695 | orchestrator | Saturday 01 November 2025 13:10:42 +0000 (0:00:01.072) 0:00:01.396 *****
2025-11-01 13:14:47.497702 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497710 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497718 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497725 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497732 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497739 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497746 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-11-01 13:14:47.497753 | orchestrator |
2025-11-01 13:14:47.497760 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-11-01 13:14:47.497787 | orchestrator |
2025-11-01 13:14:47.497795 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-11-01 13:14:47.497802 | orchestrator | Saturday 01 November 2025 13:10:43 +0000 (0:00:00.822) 0:00:02.219 *****
2025-11-01 13:14:47.497810 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:14:47.497818 | orchestrator |
2025-11-01 13:14:47.497826 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-11-01 13:14:47.497833 | orchestrator | Saturday 01 November 2025 13:10:44 +0000 (0:00:01.705) 0:00:03.924 *****
2025-11-01 13:14:47.497856 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-11-01 13:14:47.497868 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.497927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.497936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.497947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.497963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.497977 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-11-01 13:14:47.497988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498097 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498197 | orchestrator |
2025-11-01 13:14:47.498204 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-11-01 13:14:47.498212 | orchestrator | Saturday 01 November 2025 13:10:48 +0000 (0:00:03.597) 0:00:07.521 *****
2025-11-01 13:14:47.498219 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:14:47.498227 | orchestrator |
2025-11-01 13:14:47.498234 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-11-01 13:14:47.498242 | orchestrator | Saturday 01 November 2025 13:10:50 +0000 (0:00:01.505) 0:00:09.026 *****
2025-11-01 13:14:47.498249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498277 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-11-01 13:14:47.498312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498368 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.498376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.498471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.498479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True,
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 13:14:47.498488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.498496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.498507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.498515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.498527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.498539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.498547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.498554 | orchestrator | 2025-11-01 13:14:47.498562 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-11-01 13:14:47.498569 | orchestrator | Saturday 01 November 2025 13:10:57 +0000 (0:00:07.075) 0:00:16.102 ***** 2025-11-01 13:14:47.498577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 13:14:47.498584 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498604 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 13:14:47.498622 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498719 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:14:47.498727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498773 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:47.498780 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:47.498788 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:47.498800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498815 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498823 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:14:47.498830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498861 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:14:47.498869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498897 | orchestrator | skipping: 
[testbed-node-5] 2025-11-01 13:14:47.498905 | orchestrator | 2025-11-01 13:14:47.498912 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-11-01 13:14:47.498919 | orchestrator | Saturday 01 November 2025 13:10:58 +0000 (0:00:01.831) 0:00:17.933 ***** 2025-11-01 13:14:47.498927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-11-01 13:14:47.498935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.498958 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-11-01 13:14:47.498966 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.498974 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:14:47.498986 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.498994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.499039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499074 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:47.499081 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:47.499089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.499103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-11-01 13:14:47.499137 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:47.499148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.499156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499171 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:14:47.499183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.499191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499210 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:14:47.499217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-11-01 13:14:47.499225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-11-01 13:14:47.499245 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:14:47.499252 | orchestrator | 2025-11-01 13:14:47.499260 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-11-01 13:14:47.499267 | orchestrator | Saturday 01 November 2025 13:11:01 +0000 (0:00:02.594) 0:00:20.528 ***** 2025-11-01 13:14:47.499274 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-11-01 13:14:47.499326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-11-01 13:14:47.499353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499372 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-11-01 13:14:47.499380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499427 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499483 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499507 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-11-01 13:14:47.499515 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-11-01 13:14:47.499523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499542 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.499557 | orchestrator | 2025-11-01 13:14:47.499564 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-11-01 13:14:47.499571 | orchestrator | Saturday 01 November 2025 13:11:08 +0000 (0:00:06.629) 0:00:27.158 ***** 2025-11-01 13:14:47.499579 | orchestrator | ok: [testbed-manager -> localhost] 2025-11-01 13:14:47.499591 | orchestrator | 2025-11-01 13:14:47.499599 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-11-01 13:14:47.499609 | orchestrator | Saturday 01 November 2025 13:11:09 +0000 (0:00:01.682) 0:00:28.840 ***** 2025-11-01 13:14:47.499617 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.499625 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.499633 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.499644 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098318, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.853499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.499652 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.499660 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:14:47.499997 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500020 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098288, 
'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8418112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.500027 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500035 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500050 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500058 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500066 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500083 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500091 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500099 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500106 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500118 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500126 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500133 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500149 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500157 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500164 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500172 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2025-11-01 13:14:47.500179 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500190 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500198 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500211 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500222 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500230 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500238 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500245 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500256 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500264 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500277 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500337 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500346 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500353 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500361 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-11-01 13:14:47.500372 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500385 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2025-11-01 13:14:47.500393 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500405 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500412 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500420 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500428 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500439 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500451 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-11-01 13:14:47.500459 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-11-01 13:14:47.500471 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500479 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500487 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-11-01 13:14:47.500495 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500506 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-11-01 13:14:47.500518 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500526 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500538 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-11-01 13:14:47.500546 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-11-01 13:14:47.500554 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500562 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500579 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500587 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-11-01 13:14:47.500595 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-11-01 13:14:47.500607 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-11-01 13:14:47.500616 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500624 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500633 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-11-01 13:14:47.500650 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500659 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-11-01 13:14:47.500668 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-11-01 13:14:47.500681 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2025-11-01 13:14:47.500689 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-11-01 13:14:47.500698 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098268, 'dev': 120,
'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8364186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500706 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500722 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098268, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8364186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500731 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500740 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098277, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8390634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:14:47.500754 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098268, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8364186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500763 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 
13:14:47.500771 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098331, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500780 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098331, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500796 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098331, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500805 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500814 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500827 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098331, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500836 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098307, 'dev': 120, 'nlink': 
1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8501596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500844 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098307, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8501596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500857 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500868 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500877 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098307, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8501596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500884 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098291, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8431091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:14:47.500897 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098307, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8501596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 
13:14:47.500905 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500913 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500925 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:14:47.500934 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500946 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500954 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500962 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500974 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500981 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500988 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.500999 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:14:47.501006 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 
'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501016 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501024 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501031 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501041 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098298, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8474624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-11-01 13:14:47.501048 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501059 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501067 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501077 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501084 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501091 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:14:47.501098 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501108 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501115 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:14:47.501122 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-11-01 13:14:47.501135 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:14:47.501314 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098292, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8434644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501324 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501331 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.501342 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098287, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8412893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501350 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098312, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.852229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098268, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8364186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501363 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098331, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501375 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098307, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8501596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501386 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098280, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8393042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501394 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098270, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8369408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501401 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098297, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8452253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501408 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098294, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8439822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501415 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098329, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.857309, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-11-01 13:14:47.501422 | orchestrator |
2025-11-01 13:14:47.501429 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-11-01 13:14:47.501436 | orchestrator | Saturday 01 November 2025 13:11:51 +0000 (0:00:42.077) 0:01:10.917 *****
2025-11-01 13:14:47.501443 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 13:14:47.501454 | orchestrator |
2025-11-01 13:14:47.501461 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-11-01 13:14:47.501468 | orchestrator | Saturday 01 November 2025 13:11:52 +0000 (0:00:00.865) 0:01:11.783 *****
2025-11-01 13:14:47.501475 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501509 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501567 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501601 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501637 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501670 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501703 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-11-01 13:14:47.501740 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-11-01 13:14:47.501747 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 13:14:47.501754 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-01 13:14:47.501760 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-11-01 13:14:47.501767 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-01 13:14:47.501774 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-11-01 13:14:47.501780 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-11-01 13:14:47.501787 |
orchestrator |
2025-11-01 13:14:47.501798 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-11-01 13:14:47.501805 | orchestrator | Saturday 01 November 2025 13:11:56 +0000 (0:00:03.334) 0:01:15.117 *****
2025-11-01 13:14:47.501812 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501819 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.501825 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501832 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.501839 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501846 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.501852 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501859 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.501866 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501873 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.501879 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501886 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.501893 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-11-01 13:14:47.501900 | orchestrator |
2025-11-01 13:14:47.501906 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-11-01 13:14:47.501913 | orchestrator | Saturday 01 November 2025 13:12:19 +0000 (0:00:23.736) 0:01:38.854 *****
2025-11-01 13:14:47.501919 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501926 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.501933 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501940 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.501946 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501953 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.501961 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501968 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.501976 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501983 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.501991 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.501998 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502006 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-11-01 13:14:47.502013 | orchestrator |
2025-11-01 13:14:47.502063 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-11-01 13:14:47.502071 | orchestrator | Saturday 01 November 2025 13:12:24 +0000 (0:00:04.555) 0:01:43.410 *****
2025-11-01 13:14:47.502078 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502090 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502098 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502105 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502113 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502121 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502134 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502142 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502149 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502157 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502164 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502172 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-11-01 13:14:47.502179 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502187 | orchestrator |
2025-11-01 13:14:47.502194 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-11-01 13:14:47.502205 | orchestrator | Saturday 01 November 2025 13:12:27 +0000 (0:00:03.528) 0:01:46.938 *****
2025-11-01 13:14:47.502213 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 13:14:47.502220 | orchestrator |
2025-11-01 13:14:47.502228 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-11-01 13:14:47.502235 | orchestrator | Saturday 01 November 2025 13:12:29 +0000 (0:00:01.472) 0:01:48.410 *****
2025-11-01 13:14:47.502243 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:14:47.502250 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502258 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502266 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502273 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502294 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502301 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502307 | orchestrator |
2025-11-01 13:14:47.502314 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-11-01 13:14:47.502320 | orchestrator | Saturday 01 November 2025 13:12:30 +0000 (0:00:01.191) 0:01:49.601 *****
2025-11-01 13:14:47.502327 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:14:47.502334 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502340 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:14:47.502347 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502353 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502360 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:14:47.502366 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:14:47.502373 | orchestrator |
2025-11-01 13:14:47.502379 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-11-01 13:14:47.502386 | orchestrator | Saturday 01 November 2025 13:12:34 +0000 (0:00:03.742) 0:01:53.344 *****
2025-11-01 13:14:47.502393 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502400 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502407 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:14:47.502414 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502420 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502427 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502433 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502440 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502447 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502453 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502460 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502467 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502473 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-11-01 13:14:47.502485 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502491 | orchestrator |
2025-11-01 13:14:47.502498 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-11-01 13:14:47.502504 | orchestrator | Saturday 01 November 2025 13:12:37 +0000 (0:00:03.237) 0:01:56.582 *****
2025-11-01 13:14:47.502511 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502518 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502524 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502531 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502538 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502544 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502551 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502558 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502568 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502575 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502581 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502588 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-11-01 13:14:47.502595 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502601 | orchestrator |
2025-11-01 13:14:47.502608 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-11-01 13:14:47.502615 | orchestrator | Saturday 01 November 2025 13:12:41 +0000 (0:00:04.321) 0:02:00.903 *****
2025-11-01 13:14:47.502621 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-11-01 13:14:47.502654 | orchestrator | ok: [testbed-manager -> localhost]
2025-11-01 13:14:47.502661 | orchestrator |
2025-11-01 13:14:47.502667 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-11-01 13:14:47.502674 | orchestrator | Saturday 01 November 2025 13:12:43 +0000 (0:00:01.534) 0:02:02.438 *****
2025-11-01 13:14:47.502681 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:14:47.502687 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502694 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502704 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502711 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502717 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502724 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502731 | orchestrator |
2025-11-01 13:14:47.502737 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-11-01 13:14:47.502744 | orchestrator | Saturday 01 November 2025 13:12:45 +0000 (0:00:02.073) 0:02:04.512 *****
2025-11-01 13:14:47.502751 | orchestrator | skipping: [testbed-manager]
2025-11-01 13:14:47.502757 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:14:47.502764 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:14:47.502770 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:14:47.502777 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:14:47.502783 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:14:47.502790 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:14:47.502796 | orchestrator |
2025-11-01 13:14:47.502807 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-11-01 13:14:47.502814 | orchestrator | Saturday 01 November 2025 13:12:47 +0000 (0:00:02.090) 0:02:06.603 *****
2025-11-01 13:14:47.502821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502829 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502836 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-11-01 13:14:47.502847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502916 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-11-01 13:14:47.502923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502956 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.502987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.502994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.503005 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-11-01 13:14:47.503018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.503025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.503032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-11-01 13:14:47.503039 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.503049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.503056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-11-01 13:14:47.503063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-11-01 13:14:47.503075 | orchestrator | 2025-11-01 13:14:47.503085 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-11-01 13:14:47.503092 | orchestrator | Saturday 01 November 2025 13:12:53 +0000 (0:00:05.610) 0:02:12.213 ***** 2025-11-01 13:14:47.503099 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-11-01 13:14:47.503106 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:14:47.503113 | orchestrator | 2025-11-01 13:14:47.503119 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503126 | orchestrator | Saturday 01 November 2025 13:12:55 +0000 (0:00:02.473) 0:02:14.687 ***** 2025-11-01 13:14:47.503132 | orchestrator | 2025-11-01 13:14:47.503139 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503146 | orchestrator | Saturday 01 November 2025 13:12:55 +0000 (0:00:00.172) 0:02:14.859 ***** 2025-11-01 13:14:47.503152 | orchestrator | 2025-11-01 13:14:47.503159 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503165 | orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.152) 0:02:15.011 ***** 2025-11-01 13:14:47.503172 | orchestrator | 2025-11-01 13:14:47.503179 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503185 | orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.074) 0:02:15.086 ***** 2025-11-01 13:14:47.503192 | orchestrator | 2025-11-01 13:14:47.503198 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503205 
| orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.407) 0:02:15.493 ***** 2025-11-01 13:14:47.503211 | orchestrator | 2025-11-01 13:14:47.503218 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503224 | orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.087) 0:02:15.581 ***** 2025-11-01 13:14:47.503231 | orchestrator | 2025-11-01 13:14:47.503237 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-11-01 13:14:47.503244 | orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.082) 0:02:15.663 ***** 2025-11-01 13:14:47.503250 | orchestrator | 2025-11-01 13:14:47.503257 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-11-01 13:14:47.503264 | orchestrator | Saturday 01 November 2025 13:12:56 +0000 (0:00:00.108) 0:02:15.771 ***** 2025-11-01 13:14:47.503270 | orchestrator | changed: [testbed-manager] 2025-11-01 13:14:47.503277 | orchestrator | 2025-11-01 13:14:47.503294 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-11-01 13:14:47.503301 | orchestrator | Saturday 01 November 2025 13:13:15 +0000 (0:00:18.959) 0:02:34.730 ***** 2025-11-01 13:14:47.503307 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:47.503314 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:47.503321 | orchestrator | changed: [testbed-manager] 2025-11-01 13:14:47.503327 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:47.503334 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:14:47.503340 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:14:47.503347 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:14:47.503353 | orchestrator | 2025-11-01 13:14:47.503360 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-11-01 13:14:47.503366 | 
orchestrator | Saturday 01 November 2025 13:13:32 +0000 (0:00:17.048) 0:02:51.779 ***** 2025-11-01 13:14:47.503373 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:47.503379 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:47.503386 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:47.503392 | orchestrator | 2025-11-01 13:14:47.503399 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-11-01 13:14:47.503410 | orchestrator | Saturday 01 November 2025 13:13:42 +0000 (0:00:10.041) 0:03:01.820 ***** 2025-11-01 13:14:47.503417 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:47.503423 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:47.503430 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:47.503436 | orchestrator | 2025-11-01 13:14:47.503443 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-11-01 13:14:47.503449 | orchestrator | Saturday 01 November 2025 13:13:53 +0000 (0:00:10.345) 0:03:12.166 ***** 2025-11-01 13:14:47.503456 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:47.503462 | orchestrator | changed: [testbed-manager] 2025-11-01 13:14:47.503469 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:14:47.503475 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:47.503485 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:14:47.503492 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:47.503499 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:14:47.503505 | orchestrator | 2025-11-01 13:14:47.503512 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-11-01 13:14:47.503518 | orchestrator | Saturday 01 November 2025 13:14:05 +0000 (0:00:12.423) 0:03:24.589 ***** 2025-11-01 13:14:47.503525 | orchestrator | changed: [testbed-manager] 2025-11-01 13:14:47.503531 | orchestrator | 2025-11-01 
13:14:47.503538 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-11-01 13:14:47.503545 | orchestrator | Saturday 01 November 2025 13:14:14 +0000 (0:00:08.956) 0:03:33.546 ***** 2025-11-01 13:14:47.503551 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:14:47.503558 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:14:47.503564 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:14:47.503571 | orchestrator | 2025-11-01 13:14:47.503577 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-11-01 13:14:47.503584 | orchestrator | Saturday 01 November 2025 13:14:21 +0000 (0:00:07.377) 0:03:40.923 ***** 2025-11-01 13:14:47.503591 | orchestrator | changed: [testbed-manager] 2025-11-01 13:14:47.503597 | orchestrator | 2025-11-01 13:14:47.503604 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-11-01 13:14:47.503610 | orchestrator | Saturday 01 November 2025 13:14:33 +0000 (0:00:11.198) 0:03:52.122 ***** 2025-11-01 13:14:47.503617 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:14:47.503623 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:14:47.503630 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:14:47.503637 | orchestrator | 2025-11-01 13:14:47.503643 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:14:47.503653 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 13:14:47.503661 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 13:14:47.503667 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 13:14:47.503674 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 
ignored=0 2025-11-01 13:14:47.503681 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-01 13:14:47.503687 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-01 13:14:47.503694 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-11-01 13:14:47.503700 | orchestrator | 2025-11-01 13:14:47.503707 | orchestrator | 2025-11-01 13:14:47.503718 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:14:47.503724 | orchestrator | Saturday 01 November 2025 13:14:44 +0000 (0:00:11.608) 0:04:03.731 ***** 2025-11-01 13:14:47.503731 | orchestrator | =============================================================================== 2025-11-01 13:14:47.503738 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 42.08s 2025-11-01 13:14:47.503744 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 23.74s 2025-11-01 13:14:47.503751 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.96s 2025-11-01 13:14:47.503757 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.05s 2025-11-01 13:14:47.503764 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 12.42s 2025-11-01 13:14:47.503770 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.61s 2025-11-01 13:14:47.503777 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.20s 2025-11-01 13:14:47.503783 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.35s 2025-11-01 13:14:47.503790 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.04s 2025-11-01 
13:14:47.503796 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.96s 2025-11-01 13:14:47.503803 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.38s 2025-11-01 13:14:47.503809 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.08s 2025-11-01 13:14:47.503816 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.63s 2025-11-01 13:14:47.503822 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.61s 2025-11-01 13:14:47.503829 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.56s 2025-11-01 13:14:47.503835 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.32s 2025-11-01 13:14:47.503842 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.74s 2025-11-01 13:14:47.503848 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.60s 2025-11-01 13:14:47.503855 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.53s 2025-11-01 13:14:47.503861 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.33s 2025-11-01 13:14:47.503871 | orchestrator | 2025-11-01 13:14:47 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:14:47.503878 | orchestrator | 2025-11-01 13:14:47 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:14:47.503884 | orchestrator | 2025-11-01 13:14:47 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:14:47.503891 | orchestrator | 2025-11-01 13:14:47 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:14:47.503898 | orchestrator | 2025-11-01 13:14:47 | INFO  | Wait 1 second(s) until the 
next check 2025-11-01 13:14:50.542444 | orchestrator | 2025-11-01 13:14:50 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:14:50.544109 | orchestrator | 2025-11-01 13:14:50 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:14:50.546123 | orchestrator | 2025-11-01 13:14:50 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:14:50.548256 | orchestrator | 2025-11-01 13:14:50 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:14:50.548324 | orchestrator | 2025-11-01 13:14:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:14:53.592915 | orchestrator | 2025-11-01 13:14:53 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:14:53.593427 | orchestrator | 2025-11-01 13:14:53 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:14:53.594944 | orchestrator | 2025-11-01 13:14:53 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:14:53.596159 | orchestrator | 2025-11-01 13:14:53 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:14:53.596180 | orchestrator | 2025-11-01 13:14:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:14:56.643878 | orchestrator | 2025-11-01 13:14:56 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:14:56.646341 | orchestrator | 2025-11-01 13:14:56 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:14:56.650835 | orchestrator | 2025-11-01 13:14:56 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:14:56.654785 | orchestrator | 2025-11-01 13:14:56 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:14:56.655745 | orchestrator | 2025-11-01 13:14:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 
13:16:03.707406 | orchestrator | 2025-11-01 13:16:03 | INFO  | Task
c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:16:03.708061 | orchestrator | 2025-11-01 13:16:03 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:16:03.708885 | orchestrator | 2025-11-01 13:16:03 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state STARTED 2025-11-01 13:16:03.709508 | orchestrator | 2025-11-01 13:16:03 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:16:03.709531 | orchestrator | 2025-11-01 13:16:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:16:06.812515 | orchestrator | 2025-11-01 13:16:06 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:16:06.813285 | orchestrator | 2025-11-01 13:16:06 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:16:06.814164 | orchestrator | 2025-11-01 13:16:06 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state STARTED 2025-11-01 13:16:06.816502 | orchestrator | 2025-11-01 13:16:06 | INFO  | Task 6e094866-7a0c-4eef-a0d5-79c1e0897b13 is in state SUCCESS 2025-11-01 13:16:06.818246 | orchestrator | 2025-11-01 13:16:06.818278 | orchestrator | 2025-11-01 13:16:06.818318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:16:06.818331 | orchestrator | 2025-11-01 13:16:06.818342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:16:06.818353 | orchestrator | Saturday 01 November 2025 13:11:33 +0000 (0:00:01.323) 0:00:01.323 ***** 2025-11-01 13:16:06.818364 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:16:06.818513 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:16:06.818525 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:16:06.818535 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:16:06.818546 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:16:06.818557 | orchestrator | ok: 
[testbed-node-5] 2025-11-01 13:16:06.818567 | orchestrator | 2025-11-01 13:16:06.818611 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:16:06.818623 | orchestrator | Saturday 01 November 2025 13:11:35 +0000 (0:00:01.958) 0:00:03.282 ***** 2025-11-01 13:16:06.818634 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-11-01 13:16:06.818645 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-11-01 13:16:06.818656 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-11-01 13:16:06.818667 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-11-01 13:16:06.818677 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-11-01 13:16:06.818688 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-11-01 13:16:06.818699 | orchestrator | 2025-11-01 13:16:06.818721 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-11-01 13:16:06.818732 | orchestrator | 2025-11-01 13:16:06.818743 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 13:16:06.818754 | orchestrator | Saturday 01 November 2025 13:11:36 +0000 (0:00:00.952) 0:00:04.234 ***** 2025-11-01 13:16:06.818765 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:16:06.818777 | orchestrator | 2025-11-01 13:16:06.818788 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-11-01 13:16:06.818799 | orchestrator | Saturday 01 November 2025 13:11:37 +0000 (0:00:01.482) 0:00:05.716 ***** 2025-11-01 13:16:06.818810 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-11-01 13:16:06.818821 | orchestrator | 2025-11-01 13:16:06.818831 | orchestrator | TASK 
[service-ks-register : cinder | Creating endpoints] *********************** 2025-11-01 13:16:06.818863 | orchestrator | Saturday 01 November 2025 13:11:41 +0000 (0:00:03.964) 0:00:09.681 ***** 2025-11-01 13:16:06.818902 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-11-01 13:16:06.818916 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-11-01 13:16:06.818929 | orchestrator | 2025-11-01 13:16:06.818942 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-11-01 13:16:06.818969 | orchestrator | Saturday 01 November 2025 13:11:49 +0000 (0:00:07.415) 0:00:17.097 ***** 2025-11-01 13:16:06.818982 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:16:06.818994 | orchestrator | 2025-11-01 13:16:06.819007 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-11-01 13:16:06.819019 | orchestrator | Saturday 01 November 2025 13:11:52 +0000 (0:00:03.521) 0:00:20.619 ***** 2025-11-01 13:16:06.819031 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:16:06.819044 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-11-01 13:16:06.819104 | orchestrator | 2025-11-01 13:16:06.819117 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-11-01 13:16:06.819150 | orchestrator | Saturday 01 November 2025 13:11:57 +0000 (0:00:04.510) 0:00:25.130 ***** 2025-11-01 13:16:06.819163 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:16:06.819176 | orchestrator | 2025-11-01 13:16:06.819188 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-11-01 13:16:06.819201 | orchestrator | Saturday 01 November 2025 13:12:01 +0000 (0:00:04.109) 
0:00:29.239 ***** 2025-11-01 13:16:06.819213 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-11-01 13:16:06.819224 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-11-01 13:16:06.819235 | orchestrator | 2025-11-01 13:16:06.819245 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-11-01 13:16:06.819256 | orchestrator | Saturday 01 November 2025 13:12:10 +0000 (0:00:09.439) 0:00:38.679 ***** 2025-11-01 13:16:06.819271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.819354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.819383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.819396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.819515 | orchestrator | 2025-11-01 13:16:06.819532 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 13:16:06.819544 | orchestrator | Saturday 01 November 2025 13:12:14 +0000 (0:00:03.260) 0:00:41.939 ***** 2025-11-01 13:16:06.819555 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.819567 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.819578 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.819588 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.819599 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.819610 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.819621 | orchestrator | 2025-11-01 13:16:06.819633 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 13:16:06.819650 | orchestrator | Saturday 01 November 2025 13:12:14 +0000 (0:00:00.747) 0:00:42.687 ***** 2025-11-01 13:16:06.819661 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.819672 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.819682 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.819693 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:16:06.819704 | orchestrator | 2025-11-01 13:16:06.819715 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-11-01 13:16:06.819726 | orchestrator | Saturday 01 November 2025 13:12:16 +0000 (0:00:01.281) 0:00:43.969 ***** 2025-11-01 13:16:06.819737 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-11-01 13:16:06.819748 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-11-01 13:16:06.819759 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-11-01 13:16:06.819770 
| orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-11-01 13:16:06.819780 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-11-01 13:16:06.819791 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-11-01 13:16:06.819802 | orchestrator | 2025-11-01 13:16:06.819813 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-11-01 13:16:06.819824 | orchestrator | Saturday 01 November 2025 13:12:18 +0000 (0:00:02.713) 0:00:46.682 ***** 2025-11-01 13:16:06.819842 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819854 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819867 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819892 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819904 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819921 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-11-01 13:16:06.819934 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.819946 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.819971 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.819983 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.820000 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.820013 | orchestrator | 
changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-11-01 13:16:06.820024 | orchestrator | 2025-11-01 13:16:06.820035 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-11-01 13:16:06.820047 | orchestrator | Saturday 01 November 2025 13:12:23 +0000 (0:00:05.114) 0:00:51.796 ***** 2025-11-01 13:16:06.820058 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:16:06.820069 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:16:06.820080 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-11-01 13:16:06.820097 | orchestrator | 2025-11-01 13:16:06.820108 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-11-01 13:16:06.820119 | orchestrator | Saturday 01 November 2025 13:12:27 +0000 (0:00:03.528) 0:00:55.324 ***** 2025-11-01 13:16:06.820130 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-11-01 13:16:06.820141 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-11-01 13:16:06.820151 | orchestrator | changed: 
[testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-11-01 13:16:06.820162 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 13:16:06.820173 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 13:16:06.820189 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-11-01 13:16:06.820200 | orchestrator | 2025-11-01 13:16:06.820211 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-11-01 13:16:06.820222 | orchestrator | Saturday 01 November 2025 13:12:31 +0000 (0:00:04.062) 0:00:59.386 ***** 2025-11-01 13:16:06.820233 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-11-01 13:16:06.820244 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-11-01 13:16:06.820255 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-11-01 13:16:06.820266 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-11-01 13:16:06.820277 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-11-01 13:16:06.820287 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-11-01 13:16:06.820317 | orchestrator | 2025-11-01 13:16:06.820328 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-11-01 13:16:06.820339 | orchestrator | Saturday 01 November 2025 13:12:33 +0000 (0:00:01.820) 0:01:01.206 ***** 2025-11-01 13:16:06.820350 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.820361 | orchestrator | 2025-11-01 13:16:06.820371 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-11-01 13:16:06.820382 | orchestrator | Saturday 01 November 2025 13:12:33 +0000 (0:00:00.169) 0:01:01.376 ***** 2025-11-01 13:16:06.820393 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.820404 | orchestrator | skipping: 
[testbed-node-1] 2025-11-01 13:16:06.820415 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.820426 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.820436 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.820447 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.820458 | orchestrator | 2025-11-01 13:16:06.820468 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 13:16:06.820479 | orchestrator | Saturday 01 November 2025 13:12:35 +0000 (0:00:01.451) 0:01:02.828 ***** 2025-11-01 13:16:06.820491 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:16:06.820503 | orchestrator | 2025-11-01 13:16:06.820518 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-11-01 13:16:06.820529 | orchestrator | Saturday 01 November 2025 13:12:37 +0000 (0:00:02.360) 0:01:05.191 ***** 2025-11-01 13:16:06.820552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 
13:16:06.820573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.820591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.820603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.820615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.820632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.820663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.820674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821264 | orchestrator | 2025-11-01 13:16:06.821276 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-11-01 13:16:06.821287 | orchestrator | Saturday 01 November 2025 13:12:42 +0000 (0:00:05.543) 0:01:10.735 ***** 2025-11-01 13:16:06.821355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821388 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.821399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821422 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.821439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821469 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.821481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821510 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.821521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821561 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.821572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-11-01 13:16:06.821584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821595 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.821606 | orchestrator | 2025-11-01 13:16:06.821617 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-11-01 13:16:06.821628 | orchestrator | Saturday 01 November 2025 13:12:45 +0000 (0:00:02.383) 0:01:13.118 ***** 2025-11-01 13:16:06.821645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821697 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.821708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821720 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.821731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.821750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821761 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.821773 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821805 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.821823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821849 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.821867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.821897 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.821909 | orchestrator | 2025-11-01 13:16:06.821920 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-11-01 13:16:06.821931 | orchestrator | Saturday 01 November 2025 13:12:49 +0000 (0:00:04.160) 0:01:17.278 ***** 2025-11-01 13:16:06.821943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.821962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.821975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.821994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822064 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822163 | orchestrator | 2025-11-01 13:16:06.822173 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-11-01 13:16:06.822187 | orchestrator | Saturday 01 November 2025 13:12:54 +0000 (0:00:04.596) 0:01:21.875 ***** 2025-11-01 13:16:06.822197 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 13:16:06.822207 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.822217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 13:16:06.822227 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 13:16:06.822236 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.822246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 13:16:06.822255 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-11-01 
13:16:06.822265 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-11-01 13:16:06.822275 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.822284 | orchestrator | 2025-11-01 13:16:06.822312 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-11-01 13:16:06.822322 | orchestrator | Saturday 01 November 2025 13:12:57 +0000 (0:00:03.352) 0:01:25.227 ***** 2025-11-01 13:16:06.822332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.822477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-11-01 13:16:06.822492 | orchestrator | 2025-11-01 13:16:06.822502 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-11-01 13:16:06.822512 | orchestrator | Saturday 01 November 2025 13:13:13 +0000 (0:00:15.647) 0:01:40.875 ***** 2025-11-01 13:16:06.822527 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.822537 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.822546 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.822556 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:16:06.822565 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:16:06.822575 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:16:06.822584 | orchestrator | 2025-11-01 13:16:06.822594 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-11-01 13:16:06.822603 | orchestrator | Saturday 01 November 2025 13:13:15 +0000 (0:00:02.923) 0:01:43.799 ***** 2025-11-01 13:16:06.822613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.822624 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822634 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.822648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.822659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822675 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.822690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-11-01 13:16:06.822701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822711 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.822721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822746 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.822756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822784 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.822799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-11-01 13:16:06.822820 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.822830 | orchestrator | 2025-11-01 13:16:06.822840 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-11-01 13:16:06.822849 | orchestrator | Saturday 01 November 2025 13:13:20 +0000 (0:00:04.288) 0:01:48.087 ***** 2025-11-01 13:16:06.822859 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.822869 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.822878 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.822888 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.822898 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.822907 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.822916 | orchestrator | 2025-11-01 13:16:06.822926 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-11-01 13:16:06.822936 | orchestrator | Saturday 01 November 2025 13:13:22 +0000 (0:00:02.101) 0:01:50.188 ***** 2025-11-01 13:16:06.822950 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-11-01 13:16:06.822996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823011 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823054 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 
13:16:06.823089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-11-01 13:16:06.823116 | orchestrator | 2025-11-01 13:16:06.823126 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-11-01 13:16:06.823136 | orchestrator | Saturday 01 November 2025 13:13:27 +0000 (0:00:04.916) 0:01:55.105 ***** 2025-11-01 13:16:06.823146 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.823155 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:16:06.823165 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:16:06.823174 | 
orchestrator | skipping: [testbed-node-3] 2025-11-01 13:16:06.823184 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:16:06.823193 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:16:06.823202 | orchestrator | 2025-11-01 13:16:06.823212 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-11-01 13:16:06.823222 | orchestrator | Saturday 01 November 2025 13:13:27 +0000 (0:00:00.657) 0:01:55.762 ***** 2025-11-01 13:16:06.823231 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:16:06.823240 | orchestrator | 2025-11-01 13:16:06.823250 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-11-01 13:16:06.823260 | orchestrator | Saturday 01 November 2025 13:13:30 +0000 (0:00:02.974) 0:01:58.737 ***** 2025-11-01 13:16:06.823269 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:16:06.823279 | orchestrator | 2025-11-01 13:16:06.823330 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-11-01 13:16:06.823343 | orchestrator | Saturday 01 November 2025 13:13:33 +0000 (0:00:02.536) 0:02:01.274 ***** 2025-11-01 13:16:06.823352 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:16:06.823362 | orchestrator | 2025-11-01 13:16:06.823372 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 13:16:06.823381 | orchestrator | Saturday 01 November 2025 13:13:53 +0000 (0:00:20.483) 0:02:21.757 ***** 2025-11-01 13:16:06.823391 | orchestrator | 2025-11-01 13:16:06.823407 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 13:16:06.823417 | orchestrator | Saturday 01 November 2025 13:13:54 +0000 (0:00:00.111) 0:02:21.869 ***** 2025-11-01 13:16:06.823426 | orchestrator | 2025-11-01 13:16:06.823436 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 
2025-11-01 13:16:06.823446 | orchestrator | Saturday 01 November 2025 13:13:54 +0000 (0:00:00.225) 0:02:22.094 ***** 2025-11-01 13:16:06.823455 | orchestrator | 2025-11-01 13:16:06.823465 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 13:16:06.823474 | orchestrator | Saturday 01 November 2025 13:13:54 +0000 (0:00:00.239) 0:02:22.334 ***** 2025-11-01 13:16:06.823484 | orchestrator | 2025-11-01 13:16:06.823494 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 13:16:06.823503 | orchestrator | Saturday 01 November 2025 13:13:54 +0000 (0:00:00.185) 0:02:22.520 ***** 2025-11-01 13:16:06.823513 | orchestrator | 2025-11-01 13:16:06.823522 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-11-01 13:16:06.823532 | orchestrator | Saturday 01 November 2025 13:13:54 +0000 (0:00:00.209) 0:02:22.730 ***** 2025-11-01 13:16:06.823541 | orchestrator | 2025-11-01 13:16:06.823551 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-11-01 13:16:06.823560 | orchestrator | Saturday 01 November 2025 13:13:55 +0000 (0:00:00.293) 0:02:23.023 ***** 2025-11-01 13:16:06.823576 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:16:06.823586 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:16:06.823595 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:16:06.823605 | orchestrator | 2025-11-01 13:16:06.823614 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-11-01 13:16:06.823624 | orchestrator | Saturday 01 November 2025 13:14:25 +0000 (0:00:30.088) 0:02:53.112 ***** 2025-11-01 13:16:06.823633 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:16:06.823643 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:16:06.823653 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:16:06.823662 | 
orchestrator | 2025-11-01 13:16:06.823672 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-11-01 13:16:06.823681 | orchestrator | Saturday 01 November 2025 13:14:37 +0000 (0:00:12.428) 0:03:05.540 ***** 2025-11-01 13:16:06.823691 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:16:06.823701 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:16:06.823710 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:16:06.823720 | orchestrator | 2025-11-01 13:16:06.823729 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-11-01 13:16:06.823739 | orchestrator | Saturday 01 November 2025 13:15:47 +0000 (0:01:09.641) 0:04:15.182 ***** 2025-11-01 13:16:06.823748 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:16:06.823758 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:16:06.823767 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:16:06.823777 | orchestrator | 2025-11-01 13:16:06.823787 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-11-01 13:16:06.823796 | orchestrator | Saturday 01 November 2025 13:16:02 +0000 (0:00:15.327) 0:04:30.509 ***** 2025-11-01 13:16:06.823806 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:16:06.823815 | orchestrator | 2025-11-01 13:16:06.823830 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:16:06.823840 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-11-01 13:16:06.823850 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 13:16:06.823858 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-11-01 13:16:06.823866 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 
skipped=8  rescued=0 ignored=0 2025-11-01 13:16:06.823874 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 13:16:06.823882 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-11-01 13:16:06.823890 | orchestrator | 2025-11-01 13:16:06.823898 | orchestrator | 2025-11-01 13:16:06.823906 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:16:06.823914 | orchestrator | Saturday 01 November 2025 13:16:04 +0000 (0:00:01.342) 0:04:31.852 ***** 2025-11-01 13:16:06.823921 | orchestrator | =============================================================================== 2025-11-01 13:16:06.823929 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 69.64s 2025-11-01 13:16:06.823937 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.09s 2025-11-01 13:16:06.823945 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.48s 2025-11-01 13:16:06.823953 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 15.65s 2025-11-01 13:16:06.823961 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 15.33s 2025-11-01 13:16:06.823973 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.43s 2025-11-01 13:16:06.823981 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.44s 2025-11-01 13:16:06.823989 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.42s 2025-11-01 13:16:06.824001 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.54s 2025-11-01 13:16:06.824009 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.11s 
2025-11-01 13:16:06.824017 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.92s 2025-11-01 13:16:06.824025 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.60s 2025-11-01 13:16:06.824032 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.51s 2025-11-01 13:16:06.824040 | orchestrator | cinder : Copying over existing policy file ------------------------------ 4.29s 2025-11-01 13:16:06.824048 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 4.16s 2025-11-01 13:16:06.824056 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.11s 2025-11-01 13:16:06.824064 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.06s 2025-11-01 13:16:06.824071 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.96s 2025-11-01 13:16:06.824079 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.53s 2025-11-01 13:16:06.824087 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.52s 
2025-11-01 13:16:06.824095 | orchestrator | 2025-11-01 13:16:06 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:16:06.824103 | orchestrator | 2025-11-01 13:16:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:16:09.863435 | orchestrator | 2025-11-01 13:16:09 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:16:09.864452 | orchestrator | 2025-11-01 13:16:09 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state STARTED 2025-11-01 13:16:09.865390 | orchestrator | 2025-11-01 13:16:09 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state STARTED 2025-11-01 13:16:09.870220 | orchestrator | 2025-11-01 13:16:09 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:16:09.870244 | orchestrator | 2025-11-01 13:16:09 | INFO  | Wait 1 second(s) until the next check 
2025-11-01 13:17:13.842408 | orchestrator | 2025-11-01 13:17:13 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:17:13.843282 | orchestrator | 2025-11-01 13:17:13 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:17:13.846092 | orchestrator | 2025-11-01 13:17:13.846120 | orchestrator | 2025-11-01 13:17:13 | INFO  | Task c2eba0b6-f6ee-4e50-869d-88d2ef736fde is in state SUCCESS 2025-11-01 13:17:13.847491 | orchestrator | 2025-11-01 
13:17:13.847528 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:17:13.847550 | orchestrator | 2025-11-01 13:17:13.847562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:17:13.847574 | orchestrator | Saturday 01 November 2025 13:14:50 +0000 (0:00:00.321) 0:00:00.321 ***** 2025-11-01 13:17:13.847585 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:17:13.847597 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:17:13.847608 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:17:13.847618 | orchestrator | 2025-11-01 13:17:13.847629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:17:13.847640 | orchestrator | Saturday 01 November 2025 13:14:51 +0000 (0:00:00.434) 0:00:00.755 ***** 2025-11-01 13:17:13.847688 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-11-01 13:17:13.847702 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-11-01 13:17:13.847713 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-11-01 13:17:13.847724 | orchestrator | 2025-11-01 13:17:13.847735 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-11-01 13:17:13.847746 | orchestrator | 2025-11-01 13:17:13.847771 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 13:17:13.847782 | orchestrator | Saturday 01 November 2025 13:14:52 +0000 (0:00:00.866) 0:00:01.622 ***** 2025-11-01 13:17:13.847793 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:17:13.847805 | orchestrator | 2025-11-01 13:17:13.847816 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-11-01 13:17:13.847826 | orchestrator | Saturday 01 
November 2025 13:14:52 +0000 (0:00:00.596) 0:00:02.219 ***** 2025-11-01 13:17:13.847925 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-11-01 13:17:13.847936 | orchestrator | 2025-11-01 13:17:13.847947 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-11-01 13:17:13.847958 | orchestrator | Saturday 01 November 2025 13:14:56 +0000 (0:00:04.040) 0:00:06.259 ***** 2025-11-01 13:17:13.847969 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-11-01 13:17:13.847980 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-11-01 13:17:13.847990 | orchestrator | 2025-11-01 13:17:13.848001 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-11-01 13:17:13.848012 | orchestrator | Saturday 01 November 2025 13:15:04 +0000 (0:00:07.377) 0:00:13.637 ***** 2025-11-01 13:17:13.848023 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:17:13.848033 | orchestrator | 2025-11-01 13:17:13.848044 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-11-01 13:17:13.848073 | orchestrator | Saturday 01 November 2025 13:15:07 +0000 (0:00:03.918) 0:00:17.555 ***** 2025-11-01 13:17:13.848084 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:17:13.848097 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-11-01 13:17:13.848109 | orchestrator | 2025-11-01 13:17:13.848120 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-11-01 13:17:13.848132 | orchestrator | Saturday 01 November 2025 13:15:12 +0000 (0:00:04.617) 0:00:22.173 ***** 2025-11-01 13:17:13.848144 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:17:13.848156 | orchestrator | 
changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-11-01 13:17:13.848169 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-11-01 13:17:13.848181 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-11-01 13:17:13.848193 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-11-01 13:17:13.848205 | orchestrator | 2025-11-01 13:17:13.848218 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-11-01 13:17:13.848230 | orchestrator | Saturday 01 November 2025 13:15:31 +0000 (0:00:18.934) 0:00:41.107 ***** 2025-11-01 13:17:13.848242 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-11-01 13:17:13.848254 | orchestrator | 2025-11-01 13:17:13.848266 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-11-01 13:17:13.848278 | orchestrator | Saturday 01 November 2025 13:15:35 +0000 (0:00:04.421) 0:00:45.528 ***** 2025-11-01 13:17:13.848295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848342 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848387 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848434 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848486 | orchestrator | 2025-11-01 13:17:13.848497 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-11-01 
13:17:13.848508 | orchestrator | Saturday 01 November 2025 13:15:38 +0000 (0:00:02.142) 0:00:47.671 ***** 2025-11-01 13:17:13.848519 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-11-01 13:17:13.848529 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-11-01 13:17:13.848540 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-11-01 13:17:13.848550 | orchestrator | 2025-11-01 13:17:13.848562 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-11-01 13:17:13.848572 | orchestrator | Saturday 01 November 2025 13:15:39 +0000 (0:00:01.014) 0:00:48.685 ***** 2025-11-01 13:17:13.848583 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.848594 | orchestrator | 2025-11-01 13:17:13.848605 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-11-01 13:17:13.848615 | orchestrator | Saturday 01 November 2025 13:15:39 +0000 (0:00:00.143) 0:00:48.829 ***** 2025-11-01 13:17:13.848626 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.848636 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.848647 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:17:13.848658 | orchestrator | 2025-11-01 13:17:13.848669 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 13:17:13.848679 | orchestrator | Saturday 01 November 2025 13:15:39 +0000 (0:00:00.552) 0:00:49.382 ***** 2025-11-01 13:17:13.848690 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:17:13.848701 | orchestrator | 2025-11-01 13:17:13.848711 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-11-01 13:17:13.848722 | orchestrator | Saturday 01 November 2025 13:15:40 +0000 (0:00:00.612) 0:00:49.994 ***** 2025-11-01 
13:17:13.848734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848770 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.848788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.848875 | orchestrator | 2025-11-01 13:17:13.848886 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-11-01 13:17:13.848897 | orchestrator | Saturday 01 November 2025 13:15:45 +0000 (0:00:05.178) 0:00:55.173 ***** 2025-11-01 13:17:13.848908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.848920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.848947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.848958 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.848977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.848996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849025 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.849079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.849092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849115 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:17:13.849126 | orchestrator | 2025-11-01 13:17:13.849138 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-11-01 13:17:13.849148 | orchestrator | Saturday 01 November 2025 13:15:48 +0000 (0:00:03.357) 0:00:58.531 ***** 2025-11-01 13:17:13.849173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.849191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849214 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.849226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.849237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849273 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.849316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.849330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.849353 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:17:13.849364 | orchestrator | 2025-11-01 13:17:13.849375 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-11-01 13:17:13.849386 | orchestrator | Saturday 01 November 2025 13:15:51 +0000 (0:00:02.442) 0:01:00.973 ***** 2025-11-01 13:17:13.849397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.849707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.849746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.849759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.849771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.849782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.849794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.849819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 
13:17:13.849831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.849842 | orchestrator | 2025-11-01 13:17:13.849853 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-11-01 13:17:13.849864 | orchestrator | Saturday 01 November 2025 13:15:57 +0000 (0:00:05.658) 0:01:06.631 ***** 2025-11-01 13:17:13.849880 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.849891 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:17:13.849902 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:17:13.849913 | orchestrator | 2025-11-01 13:17:13.849924 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-11-01 13:17:13.849935 | orchestrator | Saturday 01 November 2025 13:16:00 +0000 (0:00:03.550) 0:01:10.181 ***** 2025-11-01 13:17:13.849946 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:17:13.849957 | orchestrator | 2025-11-01 13:17:13.849968 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-11-01 13:17:13.849978 | orchestrator | Saturday 01 November 2025 13:16:01 +0000 (0:00:01.228) 0:01:11.410 ***** 2025-11-01 13:17:13.849989 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.850000 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.850011 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:17:13.850071 | 
orchestrator | 2025-11-01 13:17:13.850083 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-11-01 13:17:13.850094 | orchestrator | Saturday 01 November 2025 13:16:03 +0000 (0:00:01.391) 0:01:12.801 ***** 2025-11-01 13:17:13.850105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850236 | orchestrator | 2025-11-01 13:17:13.850247 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-11-01 13:17:13.850282 | orchestrator | Saturday 01 November 2025 13:16:17 +0000 (0:00:14.264) 0:01:27.066 ***** 2025-11-01 13:17:13.850329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.850343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.850356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.850392 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.850406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.850419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.850439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-11-01 13:17:13.850452 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.850469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-11-01 13:17:13.850483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.850503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:17:13.850515 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:17:13.850528 | orchestrator | 2025-11-01 13:17:13.850541 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-11-01 13:17:13.850553 | orchestrator | Saturday 01 November 2025 13:16:20 +0000 (0:00:02.787) 0:01:29.853 ***** 2025-11-01 13:17:13.850566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-11-01 13:17:13.850617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:17:13.850708 | orchestrator | 2025-11-01 13:17:13.850719 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-11-01 13:17:13.850730 | orchestrator | Saturday 01 November 2025 13:16:25 +0000 (0:00:04.885) 0:01:34.739 ***** 2025-11-01 13:17:13.850741 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:17:13.850752 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:17:13.850762 | orchestrator | skipping: 
[testbed-node-2] 2025-11-01 13:17:13.850773 | orchestrator | 2025-11-01 13:17:13.850791 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-11-01 13:17:13.850802 | orchestrator | Saturday 01 November 2025 13:16:25 +0000 (0:00:00.736) 0:01:35.475 ***** 2025-11-01 13:17:13.850812 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.850823 | orchestrator | 2025-11-01 13:17:13.850834 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-11-01 13:17:13.850844 | orchestrator | Saturday 01 November 2025 13:16:28 +0000 (0:00:02.818) 0:01:38.293 ***** 2025-11-01 13:17:13.850855 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.850866 | orchestrator | 2025-11-01 13:17:13.850876 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-11-01 13:17:13.850887 | orchestrator | Saturday 01 November 2025 13:16:31 +0000 (0:00:02.709) 0:01:41.002 ***** 2025-11-01 13:17:13.850898 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.850909 | orchestrator | 2025-11-01 13:17:13.850919 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 13:17:13.850930 | orchestrator | Saturday 01 November 2025 13:16:45 +0000 (0:00:14.231) 0:01:55.234 ***** 2025-11-01 13:17:13.850941 | orchestrator | 2025-11-01 13:17:13.850951 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 13:17:13.850962 | orchestrator | Saturday 01 November 2025 13:16:45 +0000 (0:00:00.083) 0:01:55.317 ***** 2025-11-01 13:17:13.850973 | orchestrator | 2025-11-01 13:17:13.850983 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-11-01 13:17:13.850994 | orchestrator | Saturday 01 November 2025 13:16:45 +0000 (0:00:00.122) 0:01:55.440 ***** 2025-11-01 13:17:13.851005 | orchestrator | 2025-11-01 
13:17:13.851015 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-11-01 13:17:13.851026 | orchestrator | Saturday 01 November 2025 13:16:45 +0000 (0:00:00.088) 0:01:55.528 ***** 2025-11-01 13:17:13.851037 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:17:13.851047 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:17:13.851058 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.851069 | orchestrator | 2025-11-01 13:17:13.851080 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-11-01 13:17:13.851090 | orchestrator | Saturday 01 November 2025 13:16:55 +0000 (0:00:09.475) 0:02:05.004 ***** 2025-11-01 13:17:13.851101 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.851112 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:17:13.851122 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:17:13.851133 | orchestrator | 2025-11-01 13:17:13.851144 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-11-01 13:17:13.851154 | orchestrator | Saturday 01 November 2025 13:17:03 +0000 (0:00:07.963) 0:02:12.968 ***** 2025-11-01 13:17:13.851165 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:17:13.851176 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:17:13.851186 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:17:13.851197 | orchestrator | 2025-11-01 13:17:13.851207 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:17:13.851219 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 13:17:13.851231 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:17:13.851242 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 
ignored=0 2025-11-01 13:17:13.851253 | orchestrator | 2025-11-01 13:17:13.851264 | orchestrator | 2025-11-01 13:17:13.851274 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:17:13.851285 | orchestrator | Saturday 01 November 2025 13:17:11 +0000 (0:00:08.005) 0:02:20.974 ***** 2025-11-01 13:17:13.851327 | orchestrator | =============================================================================== 2025-11-01 13:17:13.851346 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.93s 2025-11-01 13:17:13.851362 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 14.26s 2025-11-01 13:17:13.851373 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 14.23s 2025-11-01 13:17:13.851384 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.48s 2025-11-01 13:17:13.851395 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.01s 2025-11-01 13:17:13.851406 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.96s 2025-11-01 13:17:13.851416 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.38s 2025-11-01 13:17:13.851427 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.66s 2025-11-01 13:17:13.851438 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.18s 2025-11-01 13:17:13.851448 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.89s 2025-11-01 13:17:13.851459 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.62s 2025-11-01 13:17:13.851475 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.42s 2025-11-01 13:17:13.851486 | orchestrator | 
service-ks-register : barbican | Creating services ---------------------- 4.04s 2025-11-01 13:17:13.851497 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.92s 2025-11-01 13:17:13.851507 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.55s 2025-11-01 13:17:13.851518 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 3.36s 2025-11-01 13:17:13.851529 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.82s 2025-11-01 13:17:13.851539 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.79s 2025-11-01 13:17:13.851550 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.71s 2025-11-01 13:17:13.851561 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.44s 2025-11-01 13:17:13.851572 | orchestrator | 2025-11-01 13:17:13 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state STARTED 2025-11-01 13:17:13.851686 | orchestrator | 2025-11-01 13:17:13 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:17:13.851702 | orchestrator | 2025-11-01 13:17:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:17:16.882225 | orchestrator | 2025-11-01 13:17:16 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:17:16.883805 | orchestrator | 2025-11-01 13:17:16 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state STARTED 2025-11-01 13:17:16.886611 | orchestrator | 2025-11-01 13:17:16 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state STARTED 2025-11-01 13:17:16.889486 | orchestrator | 2025-11-01 13:17:16 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:17:16.889508 | orchestrator | 2025-11-01 13:17:16 | INFO  | Wait 1 second(s) until the next check 
[... identical polling output repeated every ~3 seconds from 13:17:19 to 13:19:55: tasks d2849631-402d-43da-9027-8d3c2ac08405, c55213b4-58ce-4b76-9bd2-e23eac18b80a, 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d, and 2bc6b83c-2478-4fda-9fac-d1c83834a374 remained in state STARTED ...]
2025-11-01 13:19:58.390970 | orchestrator | 2025-11-01 13:19:58 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:19:58.394184 | orchestrator | 2025-11-01 13:19:58 | INFO  | Task c55213b4-58ce-4b76-9bd2-e23eac18b80a is in state SUCCESS
2025-11-01 13:19:58.394895 | orchestrator |
2025-11-01 13:19:58.396695 | orchestrator |
2025-11-01 13:19:58.396727 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:19:58.396739 | orchestrator |
2025-11-01 13:19:58.396750 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:19:58.396762 | orchestrator | Saturday 01 November 2025 13:14:47 +0000 (0:00:00.296) 0:00:00.296 *****
2025-11-01 13:19:58.396802 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:19:58.396816 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:19:58.396827 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:19:58.396838 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:19:58.396849 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:19:58.396860 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:19:58.396899 | orchestrator |
2025-11-01 13:19:58.396910 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:19:58.396921 | orchestrator | Saturday 01 November 2025 13:14:48 +0000
(0:00:00.763) 0:00:01.060 *****
2025-11-01 13:19:58.396932 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-11-01 13:19:58.396943 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-11-01 13:19:58.396954 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-11-01 13:19:58.396965 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-11-01 13:19:58.396975 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-11-01 13:19:58.396986 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-11-01 13:19:58.396996 | orchestrator |
2025-11-01 13:19:58.397008 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-11-01 13:19:58.397018 | orchestrator |
2025-11-01 13:19:58.397029 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-11-01 13:19:58.397040 | orchestrator | Saturday 01 November 2025 13:14:49 +0000 (0:00:00.699) 0:00:01.759 *****
2025-11-01 13:19:58.397052 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:19:58.397064 | orchestrator |
2025-11-01 13:19:58.397075 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-11-01 13:19:58.397086 | orchestrator | Saturday 01 November 2025 13:14:50 +0000 (0:00:01.680) 0:00:03.440 *****
2025-11-01 13:19:58.397140 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:19:58.397166 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:19:58.397177 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:19:58.397188 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:19:58.397199 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:19:58.397209 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:19:58.397220 | orchestrator |
2025-11-01 13:19:58.397232 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-11-01 13:19:58.397243 | orchestrator | Saturday 01 November 2025 13:14:52 +0000 (0:00:01.955) 0:00:05.396 *****
2025-11-01 13:19:58.397254 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:19:58.397450 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:19:58.397464 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:19:58.397477 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:19:58.397490 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:19:58.397502 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:19:58.397514 | orchestrator |
2025-11-01 13:19:58.397526 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-11-01 13:19:58.397539 | orchestrator | Saturday 01 November 2025 13:14:53 +0000 (0:00:01.236) 0:00:06.632 *****
2025-11-01 13:19:58.397552 | orchestrator | ok: [testbed-node-0] => {
2025-11-01 13:19:58.397565 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397577 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397589 | orchestrator | }
2025-11-01 13:19:58.397602 | orchestrator | ok: [testbed-node-1] => {
2025-11-01 13:19:58.397630 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397643 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397655 | orchestrator | }
2025-11-01 13:19:58.397667 | orchestrator | ok: [testbed-node-2] => {
2025-11-01 13:19:58.397678 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397688 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397699 | orchestrator | }
2025-11-01 13:19:58.397710 | orchestrator | ok: [testbed-node-3] => {
2025-11-01 13:19:58.397721 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397731 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397742 | orchestrator | }
2025-11-01 13:19:58.397753 | orchestrator | ok: [testbed-node-4] => {
2025-11-01 13:19:58.397764 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397774 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397785 | orchestrator | }
2025-11-01 13:19:58.397796 | orchestrator | ok: [testbed-node-5] => {
2025-11-01 13:19:58.397806 | orchestrator |  "changed": false,
2025-11-01 13:19:58.397817 | orchestrator |  "msg": "All assertions passed"
2025-11-01 13:19:58.397828 | orchestrator | }
2025-11-01 13:19:58.397839 | orchestrator |
2025-11-01 13:19:58.397862 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-11-01 13:19:58.397874 | orchestrator | Saturday 01 November 2025 13:14:54 +0000 (0:00:00.909) 0:00:07.542 *****
2025-11-01 13:19:58.397884 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.397895 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.397906 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.397917 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.397927 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.397938 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.397949 | orchestrator |
2025-11-01 13:19:58.397960 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-11-01 13:19:58.397971 | orchestrator | Saturday 01 November 2025 13:14:55 +0000 (0:00:00.698) 0:00:08.241 *****
2025-11-01 13:19:58.397981 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-11-01 13:19:58.397992 | orchestrator |
2025-11-01 13:19:58.398003 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-11-01 13:19:58.398014 | orchestrator | Saturday 01 November 2025 13:14:59 +0000 (0:00:03.878) 0:00:12.120 *****
2025-11-01 13:19:58.398124 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-11-01 13:19:58.398136 | orchestrator |
changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-11-01 13:19:58.398148 | orchestrator |
2025-11-01 13:19:58.398173 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-11-01 13:19:58.398185 | orchestrator | Saturday 01 November 2025 13:15:06 +0000 (0:00:07.443) 0:00:19.563 *****
2025-11-01 13:19:58.398196 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-01 13:19:58.398207 | orchestrator |
2025-11-01 13:19:58.398217 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-11-01 13:19:58.398228 | orchestrator | Saturday 01 November 2025 13:15:10 +0000 (0:00:03.886) 0:00:23.450 *****
2025-11-01 13:19:58.398239 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-01 13:19:58.398250 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-11-01 13:19:58.398260 | orchestrator |
2025-11-01 13:19:58.398271 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-11-01 13:19:58.398282 | orchestrator | Saturday 01 November 2025 13:15:15 +0000 (0:00:04.732) 0:00:28.182 *****
2025-11-01 13:19:58.398292 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-01 13:19:58.398303 | orchestrator |
2025-11-01 13:19:58.398333 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-11-01 13:19:58.398344 | orchestrator | Saturday 01 November 2025 13:15:19 +0000 (0:00:03.853) 0:00:32.036 *****
2025-11-01 13:19:58.398355 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-11-01 13:19:58.398376 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-11-01 13:19:58.398386 | orchestrator |
2025-11-01 13:19:58.398405 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-11-01 13:19:58.398416 | orchestrator | Saturday 01 November 2025 13:15:27 +0000 (0:00:07.882) 0:00:39.919 *****
2025-11-01 13:19:58.398426 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.398437 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.398448 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.398459 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.398470 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.398481 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.398491 | orchestrator |
2025-11-01 13:19:58.398502 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-11-01 13:19:58.398513 | orchestrator | Saturday 01 November 2025 13:15:28 +0000 (0:00:00.907) 0:00:40.827 *****
2025-11-01 13:19:58.398524 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.398535 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.398545 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.398556 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.398566 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.398577 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.398588 | orchestrator |
2025-11-01 13:19:58.398598 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-11-01 13:19:58.398609 | orchestrator | Saturday 01 November 2025 13:15:30 +0000 (0:00:02.401) 0:00:43.229 *****
2025-11-01 13:19:58.398620 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:19:58.398631 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:19:58.398642 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:19:58.398653 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:19:58.398663 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:19:58.398674 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:19:58.398684 | orchestrator |
2025-11-01 13:19:58.398695 |
orchestrator | TASK [Setting sysctl values] *************************************************** 2025-11-01 13:19:58.398706 | orchestrator | Saturday 01 November 2025 13:15:31 +0000 (0:00:01.179) 0:00:44.409 ***** 2025-11-01 13:19:58.398716 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.398727 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.398738 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.398748 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.398759 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.398770 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.398780 | orchestrator | 2025-11-01 13:19:58.398791 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-11-01 13:19:58.398802 | orchestrator | Saturday 01 November 2025 13:15:34 +0000 (0:00:02.346) 0:00:46.755 ***** 2025-11-01 13:19:58.398816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.398848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.398874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.398886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.398898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.398910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.398921 | orchestrator | 2025-11-01 13:19:58.398932 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-11-01 13:19:58.398953 | orchestrator | Saturday 01 November 2025 13:15:37 +0000 (0:00:03.636) 0:00:50.392 ***** 2025-11-01 13:19:58.398964 | orchestrator | [WARNING]: Skipped 2025-11-01 13:19:58.398975 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-11-01 13:19:58.398986 | orchestrator | due to this access issue: 2025-11-01 13:19:58.398997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-11-01 13:19:58.399008 | orchestrator | a directory 2025-11-01 13:19:58.399018 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:19:58.399029 | orchestrator | 2025-11-01 13:19:58.399040 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 13:19:58.399057 | orchestrator | Saturday 01 November 2025 13:15:38 +0000 (0:00:01.105) 0:00:51.498 ***** 2025-11-01 13:19:58.399069 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:19:58.399081 | orchestrator | 2025-11-01 13:19:58.399092 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-11-01 13:19:58.399103 | orchestrator | Saturday 01 November 2025 13:15:40 +0000 (0:00:01.485) 0:00:52.984 ***** 2025-11-01 13:19:58.399119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.399132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.399143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.399155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.399182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 
13:19:58.399199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.399211 | orchestrator | 2025-11-01 13:19:58.399222 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-11-01 13:19:58.399233 | orchestrator | Saturday 01 November 2025 13:15:44 +0000 (0:00:04.577) 0:00:57.561 ***** 2025-11-01 13:19:58.399244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 
13:19:58.399255 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.399267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399285 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.399297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 
13:19:58.399340 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.399353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399364 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.399381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399392 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.399404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399415 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.399426 | orchestrator | 2025-11-01 13:19:58.399437 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-11-01 13:19:58.399454 | orchestrator | Saturday 01 November 2025 13:15:50 +0000 (0:00:05.417) 0:01:02.979 ***** 2025-11-01 13:19:58.399466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399477 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.399495 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399507 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.399528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399540 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.399551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399562 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.399573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399591 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.399602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399613 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.399624 | orchestrator | 2025-11-01 13:19:58.399634 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-11-01 13:19:58.399645 | orchestrator | Saturday 01 November 2025 13:15:55 +0000 (0:00:05.386) 0:01:08.365 ***** 2025-11-01 13:19:58.399656 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.399667 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.399677 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.399688 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.399698 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.399709 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.399720 | orchestrator | 2025-11-01 13:19:58.399756 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-11-01 13:19:58.399773 | orchestrator | Saturday 01 November 2025 13:15:59 +0000 (0:00:03.934) 0:01:12.300 ***** 2025-11-01 13:19:58.399785 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.399796 | orchestrator | 2025-11-01 13:19:58.399806 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-11-01 13:19:58.399817 | orchestrator | Saturday 01 November 2025 13:15:59 +0000 (0:00:00.187) 0:01:12.487 ***** 2025-11-01 13:19:58.399828 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.399838 | 
orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.399849 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.399859 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.399870 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.399881 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.399891 | orchestrator | 2025-11-01 13:19:58.399902 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-11-01 13:19:58.399913 | orchestrator | Saturday 01 November 2025 13:16:00 +0000 (0:00:00.884) 0:01:13.372 ***** 2025-11-01 13:19:58.399929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399948 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.399959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.399970 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.399982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.399993 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.400010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.400022 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.400033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400044 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.400060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400077 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.400088 | orchestrator | 2025-11-01 13:19:58.400099 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-11-01 13:19:58.400110 | orchestrator | Saturday 01 November 2025 13:16:03 +0000 (0:00:02.979) 0:01:16.351 ***** 2025-11-01 13:19:58.400121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400212 | orchestrator | 2025-11-01 13:19:58.400223 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-11-01 13:19:58.400234 | orchestrator | Saturday 01 November 2025 13:16:11 +0000 (0:00:07.556) 0:01:23.908 ***** 2025-11-01 13:19:58.400245 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.400385 | orchestrator | 2025-11-01 13:19:58.400396 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-11-01 13:19:58.400407 | orchestrator | Saturday 01 November 2025 13:16:22 +0000 (0:00:11.833) 0:01:35.742 ***** 2025-11-01 13:19:58.400427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.400446 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.400463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.400475 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.400486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.400497 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.400508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400520 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.400531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400542 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.400560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400579 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.400590 | orchestrator | 2025-11-01 13:19:58.400601 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-11-01 13:19:58.400612 | orchestrator | Saturday 01 November 2025 13:16:26 +0000 (0:00:03.783) 0:01:39.525 ***** 2025-11-01 13:19:58.400622 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.400633 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.400650 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:19:58.400661 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:19:58.400671 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.400682 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:19:58.400693 | orchestrator | 2025-11-01 13:19:58.400703 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-11-01 13:19:58.400714 | orchestrator | Saturday 01 November 2025 13:16:31 +0000 (0:00:04.357) 0:01:43.882 ***** 2025-11-01 13:19:58.400725 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400736 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.400746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400756 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.400766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.400776 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.400801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.400840 | orchestrator | 2025-11-01 13:19:58.400850 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-11-01 13:19:58.400860 | orchestrator | Saturday 01 November 2025 13:16:36 +0000 (0:00:04.987) 0:01:48.869 ***** 2025-11-01 13:19:58.400869 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.400879 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.400888 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.400898 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.400907 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.400917 | 
orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.400926 | orchestrator |
2025-11-01 13:19:58.400936 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-11-01 13:19:58.400945 | orchestrator | Saturday 01 November 2025 13:16:39 +0000 (0:00:02.879) 0:01:51.748 *****
2025-11-01 13:19:58.400955 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.400964 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.400974 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.400983 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.400993 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.401002 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.401012 | orchestrator |
2025-11-01 13:19:58.401027 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-11-01 13:19:58.401037 | orchestrator | Saturday 01 November 2025 13:16:41 +0000 (0:00:02.753) 0:01:54.501 *****
2025-11-01 13:19:58.401046 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.401056 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.401066 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.401075 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.401085 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.401094 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.401103 | orchestrator |
2025-11-01 13:19:58.401113 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-11-01 13:19:58.401123 | orchestrator | Saturday 01 November 2025 13:16:44 +0000 (0:00:02.650) 0:01:57.152 *****
2025-11-01 13:19:58.401132 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.401142 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.401151 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.401161 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.401170 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.401179 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.401189 | orchestrator |
2025-11-01 13:19:58.401198 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-11-01 13:19:58.401208 | orchestrator | Saturday 01 November 2025 13:16:47 +0000 (0:00:03.507) 0:02:00.659 *****
2025-11-01 13:19:58.401218 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.401227 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.401237 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.401246 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.401261 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.401271 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.401280 | orchestrator |
2025-11-01 13:19:58.401290 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-11-01 13:19:58.401300 | orchestrator | Saturday 01 November 2025 13:16:50 +0000 (0:00:02.658) 0:02:03.318 *****
2025-11-01 13:19:58.401309 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.401335 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.401345 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.401354 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.401364 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.401373 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.401383 | orchestrator |
2025-11-01 13:19:58.401392 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-11-01 13:19:58.401402 | orchestrator | Saturday 01 November 2025 13:16:54 +0000 (0:00:03.622) 0:02:06.940 *****
2025-11-01 13:19:58.401411 | orchestrator | skipping: [testbed-node-0] =>
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401421 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.401431 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401440 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.401450 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401464 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.401474 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401483 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.401493 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401502 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.401512 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-11-01 13:19:58.401521 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.401531 | orchestrator | 2025-11-01 13:19:58.401540 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-11-01 13:19:58.401556 | orchestrator | Saturday 01 November 2025 13:16:59 +0000 (0:00:05.655) 0:02:12.595 ***** 2025-11-01 13:19:58.401566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401587 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.401596 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.401612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401622 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.401637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401647 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.401657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401673 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.401683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401693 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.401702 | orchestrator | 2025-11-01 13:19:58.401712 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-11-01 13:19:58.401721 | orchestrator | Saturday 01 November 2025 13:17:03 +0000 (0:00:03.785) 0:02:16.380 ***** 2025-11-01 13:19:58.401731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401741 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.401759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401769 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.401784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401803 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.401814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.401824 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.401833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401843 | orchestrator | skipping: [testbed-node-3] 2025-11-01 
13:19:58.401853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.401863 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.401873 | orchestrator | 2025-11-01 13:19:58.401882 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-11-01 13:19:58.401892 | orchestrator | Saturday 01 November 2025 13:17:08 +0000 (0:00:04.728) 0:02:21.109 ***** 2025-11-01 13:19:58.401902 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.401916 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.401926 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.401936 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.401945 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.401954 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.401964 | orchestrator | 2025-11-01 13:19:58.401973 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-11-01 13:19:58.401983 | orchestrator | Saturday 01 November 2025 13:17:12 +0000 (0:00:04.508) 0:02:25.617 ***** 2025-11-01 13:19:58.402000 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.402010 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:19:58.402047 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402057 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:19:58.402067 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:19:58.402076 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:19:58.402085 | orchestrator |
2025-11-01 13:19:58.402095 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-11-01 13:19:58.402104 | orchestrator | Saturday 01 November 2025 13:17:20 +0000 (0:00:07.661) 0:02:33.279 *****
2025-11-01 13:19:58.402114 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402123 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402133 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402142 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402152 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402161 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402171 | orchestrator |
2025-11-01 13:19:58.402186 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-11-01 13:19:58.402196 | orchestrator | Saturday 01 November 2025 13:17:24 +0000 (0:00:04.293) 0:02:37.572 *****
2025-11-01 13:19:58.402206 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402216 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402225 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402234 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402244 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402253 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402262 | orchestrator |
2025-11-01 13:19:58.402272 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-11-01 13:19:58.402282 | orchestrator | Saturday 01 November 2025 13:17:30 +0000 (0:00:05.241) 0:02:42.814 *****
2025-11-01 13:19:58.402291 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402300 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402310 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402334 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402344 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402353 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402363 | orchestrator |
2025-11-01 13:19:58.402373 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-11-01 13:19:58.402382 | orchestrator | Saturday 01 November 2025 13:17:35 +0000 (0:00:05.390) 0:02:48.205 *****
2025-11-01 13:19:58.402392 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402402 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402411 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402421 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402430 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402440 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402449 | orchestrator |
2025-11-01 13:19:58.402459 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-11-01 13:19:58.402468 | orchestrator | Saturday 01 November 2025 13:17:40 +0000 (0:00:04.668) 0:02:52.873 *****
2025-11-01 13:19:58.402478 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402487 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402497 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402507 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402516 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402526 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402535 | orchestrator |
2025-11-01 13:19:58.402545 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-11-01 13:19:58.402555 | orchestrator | Saturday 01 November 2025 13:17:45 +0000 (0:00:05.502) 0:02:58.376 *****
2025-11-01 13:19:58.402565 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402574 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402584 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402601 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402610 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402620 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402629 | orchestrator |
2025-11-01 13:19:58.402639 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-11-01 13:19:58.402649 | orchestrator | Saturday 01 November 2025 13:17:50 +0000 (0:00:04.765) 0:03:03.142 *****
2025-11-01 13:19:58.402658 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402667 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:19:58.402677 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402687 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:19:58.402696 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:19:58.402706 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:19:58.402715 | orchestrator |
2025-11-01 13:19:58.402725 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-11-01 13:19:58.402735 | orchestrator | Saturday 01 November 2025 13:17:55 +0000 (0:00:05.338) 0:03:08.480 *****
2025-11-01 13:19:58.402744 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-11-01 13:19:58.402754 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-11-01 13:19:58.402764 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:19:58.402773 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:19:58.402783 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 13:19:58.402793 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.402802 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 13:19:58.402812 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.402924 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 13:19:58.402937 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.402947 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-11-01 13:19:58.402957 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.402967 | orchestrator | 2025-11-01 13:19:58.402976 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-11-01 13:19:58.402986 | orchestrator | Saturday 01 November 2025 13:17:59 +0000 (0:00:03.928) 0:03:12.409 ***** 2025-11-01 13:19:58.403002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.403012 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.403022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.403039 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.403050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-11-01 13:19:58.403060 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.403070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.403080 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.403096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.403107 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.403122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-11-01 13:19:58.403132 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.403141 | orchestrator | 2025-11-01 13:19:58.403151 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-11-01 13:19:58.403167 | orchestrator | Saturday 01 November 2025 13:18:03 +0000 (0:00:03.610) 0:03:16.019 ***** 2025-11-01 13:19:58.403178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.403189 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.403204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-11-01 13:19:58.403215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.403233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.403249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-11-01 13:19:58.403259 | orchestrator | 2025-11-01 13:19:58.403268 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-11-01 13:19:58.403278 | orchestrator | Saturday 01 November 2025 13:18:07 +0000 (0:00:04.341) 0:03:20.361 ***** 2025-11-01 13:19:58.403288 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:19:58.403298 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:19:58.403307 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:19:58.403365 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:19:58.403376 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:19:58.403386 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:19:58.403395 | orchestrator | 2025-11-01 13:19:58.403405 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-11-01 13:19:58.403414 | orchestrator | Saturday 01 November 2025 13:18:08 +0000 (0:00:00.554) 0:03:20.916 ***** 2025-11-01 13:19:58.403424 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:19:58.403434 | orchestrator | 2025-11-01 13:19:58.403443 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-11-01 13:19:58.403453 | orchestrator | Saturday 01 November 2025 13:18:10 +0000 (0:00:02.661) 0:03:23.577 ***** 2025-11-01 13:19:58.403463 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:19:58.403472 | orchestrator | 2025-11-01 13:19:58.403482 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-11-01 13:19:58.403491 | orchestrator | Saturday 01 November 2025 13:18:13 +0000 (0:00:02.931) 0:03:26.509 ***** 2025-11-01 13:19:58.403501 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:19:58.403510 | orchestrator | 2025-11-01 
13:19:58.403520 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403529 | orchestrator | Saturday 01 November 2025 13:18:59 +0000 (0:00:45.962) 0:04:12.472 ***** 2025-11-01 13:19:58.403539 | orchestrator | 2025-11-01 13:19:58.403549 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403560 | orchestrator | Saturday 01 November 2025 13:18:59 +0000 (0:00:00.150) 0:04:12.623 ***** 2025-11-01 13:19:58.403571 | orchestrator | 2025-11-01 13:19:58.403581 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403592 | orchestrator | Saturday 01 November 2025 13:19:00 +0000 (0:00:01.054) 0:04:13.677 ***** 2025-11-01 13:19:58.403603 | orchestrator | 2025-11-01 13:19:58.403614 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403625 | orchestrator | Saturday 01 November 2025 13:19:01 +0000 (0:00:00.405) 0:04:14.082 ***** 2025-11-01 13:19:58.403636 | orchestrator | 2025-11-01 13:19:58.403651 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403663 | orchestrator | Saturday 01 November 2025 13:19:01 +0000 (0:00:00.373) 0:04:14.456 ***** 2025-11-01 13:19:58.403673 | orchestrator | 2025-11-01 13:19:58.403684 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-11-01 13:19:58.403700 | orchestrator | Saturday 01 November 2025 13:19:01 +0000 (0:00:00.218) 0:04:14.675 ***** 2025-11-01 13:19:58.403711 | orchestrator | 2025-11-01 13:19:58.403722 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-11-01 13:19:58.403733 | orchestrator | Saturday 01 November 2025 13:19:02 +0000 (0:00:00.188) 0:04:14.863 ***** 2025-11-01 13:19:58.403744 | orchestrator | changed: 
[testbed-node-0] 2025-11-01 13:19:58.403755 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:19:58.403788 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:19:58.403799 | orchestrator | 2025-11-01 13:19:58.403810 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-11-01 13:19:58.403820 | orchestrator | Saturday 01 November 2025 13:19:29 +0000 (0:00:27.029) 0:04:41.892 ***** 2025-11-01 13:19:58.403829 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:19:58.403838 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:19:58.403847 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:19:58.403856 | orchestrator | 2025-11-01 13:19:58.403865 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:19:58.403878 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:19:58.403888 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-01 13:19:58.403897 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-11-01 13:19:58.403906 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:19:58.403914 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:19:58.403922 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-11-01 13:19:58.403929 | orchestrator | 2025-11-01 13:19:58.403937 | orchestrator | 2025-11-01 13:19:58.403945 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:19:58.403953 | orchestrator | Saturday 01 November 2025 13:19:57 +0000 (0:00:28.335) 0:05:10.228 ***** 2025-11-01 13:19:58.403961 | orchestrator 
| =============================================================================== 2025-11-01 13:19:58.403969 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.96s 2025-11-01 13:19:58.403976 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 28.34s 2025-11-01 13:19:58.403984 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.03s 2025-11-01 13:19:58.403992 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 11.83s 2025-11-01 13:19:58.404000 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.88s 2025-11-01 13:19:58.404007 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.66s 2025-11-01 13:19:58.404015 | orchestrator | neutron : Copying over config.json files for services ------------------- 7.56s 2025-11-01 13:19:58.404023 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.44s 2025-11-01 13:19:58.404031 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 5.66s 2025-11-01 13:19:58.404039 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.50s 2025-11-01 13:19:58.404046 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.42s 2025-11-01 13:19:58.404054 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 5.39s 2025-11-01 13:19:58.404062 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 5.39s 2025-11-01 13:19:58.404075 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 5.34s 2025-11-01 13:19:58.404083 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 5.24s 2025-11-01 13:19:58.404090 | orchestrator | neutron 
: Copying over ml2_conf.ini ------------------------------------- 4.99s 2025-11-01 13:19:58.404098 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.77s 2025-11-01 13:19:58.404115 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.73s 2025-11-01 13:19:58.404123 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.73s 2025-11-01 13:19:58.404131 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.67s 2025-11-01 13:19:58.404139 | orchestrator | 2025-11-01 13:19:58 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state STARTED 2025-11-01 13:19:58.404223 | orchestrator | 2025-11-01 13:19:58 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:19:58.404235 | orchestrator | 2025-11-01 13:19:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:01.446406 | orchestrator | 2025-11-01 13:20:01 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:01.446665 | orchestrator | 2025-11-01 13:20:01 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:01.447341 | orchestrator | 2025-11-01 13:20:01 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:20:01.449527 | orchestrator | 2025-11-01 13:20:01 | INFO  | Task 7f4d5ef5-e0fb-4f4e-b3bd-3739e64e104d is in state SUCCESS 2025-11-01 13:20:01.451603 | orchestrator | 2025-11-01 13:20:01.451637 | orchestrator | 2025-11-01 13:20:01.451649 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:20:01.451661 | orchestrator | 2025-11-01 13:20:01.451672 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:20:01.451684 | orchestrator | Saturday 01 November 2025 13:16:19 +0000 (0:00:00.784) 0:00:00.785 ***** 
2025-11-01 13:20:01.451695 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:20:01.451708 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:20:01.451718 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:20:01.451729 | orchestrator | 2025-11-01 13:20:01.451740 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:20:01.451752 | orchestrator | Saturday 01 November 2025 13:16:20 +0000 (0:00:00.540) 0:00:01.325 ***** 2025-11-01 13:20:01.451780 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-11-01 13:20:01.451792 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-11-01 13:20:01.451803 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-11-01 13:20:01.451813 | orchestrator | 2025-11-01 13:20:01.451824 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-11-01 13:20:01.451835 | orchestrator | 2025-11-01 13:20:01.451846 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 13:20:01.451858 | orchestrator | Saturday 01 November 2025 13:16:21 +0000 (0:00:01.323) 0:00:02.648 ***** 2025-11-01 13:20:01.451869 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:20:01.451881 | orchestrator | 2025-11-01 13:20:01.451892 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-11-01 13:20:01.451903 | orchestrator | Saturday 01 November 2025 13:16:22 +0000 (0:00:00.761) 0:00:03.410 ***** 2025-11-01 13:20:01.451914 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-11-01 13:20:01.451925 | orchestrator | 2025-11-01 13:20:01.451936 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-11-01 13:20:01.451946 | orchestrator | Saturday 01 November 2025 
13:16:26 +0000 (0:00:03.975) 0:00:07.386 ***** 2025-11-01 13:20:01.451957 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-11-01 13:20:01.451988 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-11-01 13:20:01.451999 | orchestrator | 2025-11-01 13:20:01.452010 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-11-01 13:20:01.452021 | orchestrator | Saturday 01 November 2025 13:16:34 +0000 (0:00:07.617) 0:00:15.003 ***** 2025-11-01 13:20:01.452032 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:20:01.452043 | orchestrator | 2025-11-01 13:20:01.452054 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-11-01 13:20:01.452065 | orchestrator | Saturday 01 November 2025 13:16:38 +0000 (0:00:04.281) 0:00:19.285 ***** 2025-11-01 13:20:01.452075 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:20:01.452086 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-11-01 13:20:01.452097 | orchestrator | 2025-11-01 13:20:01.452108 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-11-01 13:20:01.452118 | orchestrator | Saturday 01 November 2025 13:16:42 +0000 (0:00:04.565) 0:00:23.850 ***** 2025-11-01 13:20:01.452129 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:20:01.452140 | orchestrator | 2025-11-01 13:20:01.452151 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-11-01 13:20:01.452162 | orchestrator | Saturday 01 November 2025 13:16:47 +0000 (0:00:04.322) 0:00:28.172 ***** 2025-11-01 13:20:01.452173 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-11-01 13:20:01.452184 | orchestrator | 
2025-11-01 13:20:01.452194 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-11-01 13:20:01.452205 | orchestrator | Saturday 01 November 2025 13:16:51 +0000 (0:00:04.792) 0:00:32.965 ***** 2025-11-01 13:20:01.452220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}}) 2025-11-01 13:20:01.452611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452659 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452671 | orchestrator | 2025-11-01 13:20:01.452682 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-11-01 13:20:01.452693 | orchestrator | Saturday 01 November 2025 13:16:56 +0000 (0:00:04.912) 0:00:37.878 ***** 2025-11-01 13:20:01.452704 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.452715 | orchestrator | 2025-11-01 13:20:01.452726 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-11-01 13:20:01.452748 | orchestrator | Saturday 01 November 2025 13:16:57 +0000 (0:00:00.472) 0:00:38.351 ***** 2025-11-01 13:20:01.452759 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.452770 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.452781 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.452792 | orchestrator | 2025-11-01 13:20:01.452802 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 13:20:01.452813 | orchestrator | Saturday 01 November 2025 13:16:58 +0000 (0:00:01.165) 0:00:39.517 ***** 2025-11-01 13:20:01.452824 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:20:01.452846 | orchestrator | 2025-11-01 13:20:01.452857 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 
2025-11-01 13:20:01.452888 | orchestrator | Saturday 01 November 2025 13:17:00 +0000 (0:00:01.564) 0:00:41.081 ***** 2025-11-01 13:20:01.452900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452945 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.452956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.452991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2025-11-01 13:20:01.453072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.453170 | orchestrator | 2025-11-01 13:20:01.453182 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-11-01 13:20:01.453193 | orchestrator | Saturday 01 November 2025 13:17:08 +0000 (0:00:08.545) 0:00:49.627 ***** 2025-11-01 13:20:01.453204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.453222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.453244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453290 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.453302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.453339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.453597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453654 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.453665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-11-01 13:20:01.453686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.453706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453734 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453757 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.453767 | orchestrator | 2025-11-01 13:20:01.453779 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-11-01 13:20:01.453789 | orchestrator | Saturday 01 November 2025 13:17:11 +0000 (0:00:03.062) 0:00:52.690 ***** 2025-11-01 13:20:01.453801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.453822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.453838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.453889 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.453901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.453952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.453980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454090 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454132 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.454144 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.454164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.454175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.454229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-11-01 13:20:01.454242 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.454255 | orchestrator | 2025-11-01 13:20:01.454267 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-11-01 13:20:01.454280 | orchestrator | Saturday 01 November 2025 13:17:14 +0000 (0:00:02.664) 0:00:55.354 ***** 2025-11-01 13:20:01.454293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.454369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.454507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.454530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.454745 | orchestrator | 2025-11-01 13:20:01.454756 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-11-01 13:20:01.454767 | orchestrator | Saturday 01 November 2025 13:17:23 +0000 (0:00:09.083) 0:01:04.438 ***** 2025-11-01 13:20:01.454785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.454797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.454809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.455299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455556 | orchestrator | 2025-11-01 13:20:01.455567 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-11-01 13:20:01.455585 | orchestrator | Saturday 01 November 2025 13:17:57 +0000 (0:00:34.504) 0:01:38.942 ***** 2025-11-01 13:20:01.455596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 13:20:01.455607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 13:20:01.455618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-11-01 13:20:01.455629 | 
orchestrator | 2025-11-01 13:20:01.455640 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-11-01 13:20:01.455651 | orchestrator | Saturday 01 November 2025 13:18:05 +0000 (0:00:07.588) 0:01:46.531 ***** 2025-11-01 13:20:01.455661 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 13:20:01.455672 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 13:20:01.455683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-11-01 13:20:01.455693 | orchestrator | 2025-11-01 13:20:01.455704 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-11-01 13:20:01.455715 | orchestrator | Saturday 01 November 2025 13:18:08 +0000 (0:00:03.001) 0:01:49.533 ***** 2025-11-01 13:20:01.455726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.455738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.455755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.455782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.455935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.455985 | orchestrator | 2025-11-01 13:20:01.455996 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-11-01 13:20:01.456007 | orchestrator | Saturday 01 November 2025 13:18:11 +0000 (0:00:03.253) 0:01:52.787 ***** 2025-11-01 13:20:01.456018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456258 | orchestrator | 2025-11-01 13:20:01 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:01.456269 | orchestrator | 2025-11-01 13:20:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:01.456285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456297 | orchestrator | 2025-11-01 13:20:01.456308 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 13:20:01.456335 | orchestrator | Saturday 01 November 2025 13:18:14 +0000 (0:00:02.799) 0:01:55.587 ***** 2025-11-01 13:20:01.456346 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.456357 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.456368 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.456379 | orchestrator | 2025-11-01 13:20:01.456389 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-11-01 13:20:01.456400 | orchestrator | Saturday 01 November 2025 13:18:15 +0000 (0:00:00.872) 0:01:56.459 ***** 2025-11-01 13:20:01.456411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.456435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456499 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.456510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.456533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-11-01 13:20:01.456567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456595 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.456606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-11-01 13:20:01.456618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-11-01 13:20:01.456638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:20:01.456696 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.456707 | orchestrator | 2025-11-01 13:20:01.456718 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-11-01 13:20:01.456729 | orchestrator | Saturday 01 November 2025 13:18:16 +0000 (0:00:01.285) 0:01:57.744 ***** 2025-11-01 13:20:01.456740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.456752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.456769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-11-01 13:20:01.456785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456814 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:20:01.456991 | orchestrator | 2025-11-01 13:20:01.457002 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-11-01 13:20:01.457013 | orchestrator | Saturday 01 November 2025 13:18:21 +0000 (0:00:04.569) 0:02:02.314 ***** 2025-11-01 13:20:01.457024 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:20:01.457039 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:20:01.457050 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:20:01.457061 | orchestrator | 2025-11-01 13:20:01.457072 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-11-01 13:20:01.457082 | orchestrator | Saturday 01 November 2025 13:18:21 +0000 (0:00:00.347) 0:02:02.662 ***** 2025-11-01 13:20:01.457093 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-11-01 13:20:01.457104 | orchestrator | 2025-11-01 13:20:01.457114 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-11-01 13:20:01.457125 | orchestrator | Saturday 01 November 2025 13:18:24 +0000 (0:00:02.338) 0:02:05.000 ***** 2025-11-01 13:20:01.457136 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:20:01.457147 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-11-01 13:20:01.457158 | orchestrator | 2025-11-01 13:20:01.457168 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-11-01 13:20:01.457179 | orchestrator | Saturday 01 November 2025 13:18:26 +0000 (0:00:02.518) 0:02:07.519 ***** 2025-11-01 13:20:01.457196 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457206 | orchestrator | 2025-11-01 13:20:01.457217 | orchestrator | TASK [designate : Flush handlers] 
********************************************** 2025-11-01 13:20:01.457228 | orchestrator | Saturday 01 November 2025 13:18:47 +0000 (0:00:20.779) 0:02:28.298 ***** 2025-11-01 13:20:01.457238 | orchestrator | 2025-11-01 13:20:01.457249 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-01 13:20:01.457260 | orchestrator | Saturday 01 November 2025 13:18:47 +0000 (0:00:00.358) 0:02:28.657 ***** 2025-11-01 13:20:01.457270 | orchestrator | 2025-11-01 13:20:01.457281 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-11-01 13:20:01.457292 | orchestrator | Saturday 01 November 2025 13:18:47 +0000 (0:00:00.124) 0:02:28.781 ***** 2025-11-01 13:20:01.457302 | orchestrator | 2025-11-01 13:20:01.457313 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-11-01 13:20:01.457374 | orchestrator | Saturday 01 November 2025 13:18:47 +0000 (0:00:00.163) 0:02:28.945 ***** 2025-11-01 13:20:01.457384 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457395 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457406 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457417 | orchestrator | 2025-11-01 13:20:01.457427 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-11-01 13:20:01.457438 | orchestrator | Saturday 01 November 2025 13:18:57 +0000 (0:00:09.966) 0:02:38.912 ***** 2025-11-01 13:20:01.457449 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457459 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457470 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457480 | orchestrator | 2025-11-01 13:20:01.457491 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-11-01 13:20:01.457502 | orchestrator | Saturday 01 November 2025 13:19:06 +0000 (0:00:08.805) 
0:02:47.717 ***** 2025-11-01 13:20:01.457512 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457523 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457534 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457544 | orchestrator | 2025-11-01 13:20:01.457555 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-11-01 13:20:01.457566 | orchestrator | Saturday 01 November 2025 13:19:16 +0000 (0:00:09.558) 0:02:57.276 ***** 2025-11-01 13:20:01.457576 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457587 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457598 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457608 | orchestrator | 2025-11-01 13:20:01.457619 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-11-01 13:20:01.457630 | orchestrator | Saturday 01 November 2025 13:19:29 +0000 (0:00:13.601) 0:03:10.877 ***** 2025-11-01 13:20:01.457641 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457651 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457662 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457672 | orchestrator | 2025-11-01 13:20:01.457683 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-11-01 13:20:01.457694 | orchestrator | Saturday 01 November 2025 13:19:45 +0000 (0:00:15.537) 0:03:26.414 ***** 2025-11-01 13:20:01.457704 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457715 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:20:01.457726 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:20:01.457736 | orchestrator | 2025-11-01 13:20:01.457747 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-11-01 13:20:01.457758 | orchestrator | Saturday 01 November 2025 13:19:51 +0000 (0:00:06.352) 0:03:32.767 
***** 2025-11-01 13:20:01.457768 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:20:01.457779 | orchestrator | 2025-11-01 13:20:01.457790 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:20:01.457802 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-11-01 13:20:01.457820 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:20:01.457837 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:20:01.457848 | orchestrator | 2025-11-01 13:20:01.457859 | orchestrator | 2025-11-01 13:20:01.457870 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:20:01.457881 | orchestrator | Saturday 01 November 2025 13:19:59 +0000 (0:00:07.545) 0:03:40.313 ***** 2025-11-01 13:20:01.457891 | orchestrator | =============================================================================== 2025-11-01 13:20:01.457902 | orchestrator | designate : Copying over designate.conf -------------------------------- 34.50s 2025-11-01 13:20:01.457912 | orchestrator | designate : Running Designate bootstrap container ---------------------- 20.78s 2025-11-01 13:20:01.457928 | orchestrator | designate : Restart designate-mdns container --------------------------- 15.54s 2025-11-01 13:20:01.457939 | orchestrator | designate : Restart designate-producer container ----------------------- 13.60s 2025-11-01 13:20:01.457950 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.97s 2025-11-01 13:20:01.457960 | orchestrator | designate : Restart designate-central container ------------------------- 9.56s 2025-11-01 13:20:01.457971 | orchestrator | designate : Copying over config.json files for services ----------------- 9.09s 2025-11-01 13:20:01.457982 | 
orchestrator | designate : Restart designate-api container ----------------------------- 8.81s 2025-11-01 13:20:01.457992 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.55s 2025-11-01 13:20:01.458003 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.62s 2025-11-01 13:20:01.458014 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.59s 2025-11-01 13:20:01.458057 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.55s 2025-11-01 13:20:01.458067 | orchestrator | designate : Restart designate-worker container -------------------------- 6.35s 2025-11-01 13:20:01.458078 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.91s 2025-11-01 13:20:01.458089 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.79s 2025-11-01 13:20:01.458099 | orchestrator | designate : Check designate containers ---------------------------------- 4.57s 2025-11-01 13:20:01.458110 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.57s 2025-11-01 13:20:01.458121 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.32s 2025-11-01 13:20:01.458131 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.28s 2025-11-01 13:20:01.458142 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.98s 2025-11-01 13:20:04.485730 | orchestrator | 2025-11-01 13:20:04 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:04.487606 | orchestrator | 2025-11-01 13:20:04 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:04.488627 | orchestrator | 2025-11-01 13:20:04 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 
2025-11-01 13:20:04.490181 | orchestrator | 2025-11-01 13:20:04 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:04.490454 | orchestrator | 2025-11-01 13:20:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:07.527206 | orchestrator | 2025-11-01 13:20:07 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:07.527414 | orchestrator | 2025-11-01 13:20:07 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:07.529667 | orchestrator | 2025-11-01 13:20:07 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:20:07.531514 | orchestrator | 2025-11-01 13:20:07 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:07.531539 | orchestrator | 2025-11-01 13:20:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:10.570450 | orchestrator | 2025-11-01 13:20:10 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:10.571087 | orchestrator | 2025-11-01 13:20:10 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:10.572438 | orchestrator | 2025-11-01 13:20:10 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:20:10.573658 | orchestrator | 2025-11-01 13:20:10 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:10.573834 | orchestrator | 2025-11-01 13:20:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:13.618625 | orchestrator | 2025-11-01 13:20:13 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:13.621091 | orchestrator | 2025-11-01 13:20:13 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:13.621978 | orchestrator | 2025-11-01 13:20:13 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:20:13.624397 | 
orchestrator | 2025-11-01 13:20:13 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:13.624426 | orchestrator | 2025-11-01 13:20:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:20:59.332422 | orchestrator | 2025-11-01 13:20:59 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:20:59.333003 | orchestrator | 2025-11-01 13:20:59 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:20:59.334382 | orchestrator | 2025-11-01 13:20:59 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:20:59.336959 | orchestrator | 2025-11-01 13:20:59 | INFO  | Task
2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:20:59.336998 | orchestrator | 2025-11-01 13:20:59 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:21:02.381787 | orchestrator | 2025-11-01 13:21:02 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:21:02.383473 | orchestrator | 2025-11-01 13:21:02 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:21:02.385707 | orchestrator | 2025-11-01 13:21:02 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:21:02.387496 | orchestrator | 2025-11-01 13:21:02 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:21:02.387546 | orchestrator | 2025-11-01 13:21:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:21:05.428213 | orchestrator | 2025-11-01 13:21:05 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:21:05.429704 | orchestrator | 2025-11-01 13:21:05 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:21:05.429716 | orchestrator | 2025-11-01 13:21:05 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state STARTED 2025-11-01 13:21:05.429745 | orchestrator | 2025-11-01 13:21:05 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:21:05.429756 | orchestrator | 2025-11-01 13:21:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:21:08.466672 | orchestrator | 2025-11-01 13:21:08 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:21:08.466770 | orchestrator | 2025-11-01 13:21:08 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:21:08.469893 | orchestrator | 2025-11-01 13:21:08 | INFO  | Task a35b5ed0-e693-4b86-942c-fe36a6ff65b9 is in state SUCCESS 2025-11-01 13:21:08.472848 | orchestrator | 2025-11-01 13:21:08.472986 | orchestrator | 2025-11-01 
13:21:08.473003 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:21:08.473016 | orchestrator |
2025-11-01 13:21:08.473027 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:21:08.473039 | orchestrator | Saturday 01 November 2025 13:20:02 +0000 (0:00:00.275) 0:00:00.275 *****
2025-11-01 13:21:08.473050 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:21:08.473084 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:21:08.473096 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:21:08.473107 | orchestrator |
2025-11-01 13:21:08.473118 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:21:08.473145 | orchestrator | Saturday 01 November 2025 13:20:02 +0000 (0:00:00.359) 0:00:00.635 *****
2025-11-01 13:21:08.473158 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-11-01 13:21:08.473170 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-11-01 13:21:08.473181 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-11-01 13:21:08.473192 | orchestrator |
2025-11-01 13:21:08.473203 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-11-01 13:21:08.473215 | orchestrator |
2025-11-01 13:21:08.473227 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-01 13:21:08.473238 | orchestrator | Saturday 01 November 2025 13:20:03 +0000 (0:00:00.461) 0:00:01.096 *****
2025-11-01 13:21:08.473249 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:21:08.473262 | orchestrator |
2025-11-01 13:21:08.473273 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-11-01 13:21:08.473284 | orchestrator | Saturday 01 November 2025 13:20:03 +0000 (0:00:00.534) 0:00:01.630 *****
2025-11-01 13:21:08.473345 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-11-01 13:21:08.473357 | orchestrator |
2025-11-01 13:21:08.473368 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-11-01 13:21:08.473379 | orchestrator | Saturday 01 November 2025 13:20:06 +0000 (0:00:03.326) 0:00:04.957 *****
2025-11-01 13:21:08.473390 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-11-01 13:21:08.473401 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-11-01 13:21:08.473412 | orchestrator |
2025-11-01 13:21:08.473423 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-11-01 13:21:08.473434 | orchestrator | Saturday 01 November 2025 13:20:13 +0000 (0:00:06.286) 0:00:11.243 *****
2025-11-01 13:21:08.473445 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-11-01 13:21:08.473456 | orchestrator |
2025-11-01 13:21:08.473467 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-11-01 13:21:08.473477 | orchestrator | Saturday 01 November 2025 13:20:16 +0000 (0:00:03.229) 0:00:14.473 *****
2025-11-01 13:21:08.473488 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-11-01 13:21:08.473499 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-11-01 13:21:08.473510 | orchestrator |
2025-11-01 13:21:08.473520 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-11-01 13:21:08.473531 | orchestrator | Saturday 01 November 2025 13:20:20 +0000 (0:00:03.840) 0:00:18.314 *****
2025-11-01 13:21:08.473542 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-11-01 13:21:08.473553 | orchestrator |
2025-11-01 13:21:08.473564 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-11-01 13:21:08.473592 | orchestrator | Saturday 01 November 2025 13:20:23 +0000 (0:00:03.568) 0:00:21.883 *****
2025-11-01 13:21:08.473604 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-11-01 13:21:08.473614 | orchestrator |
2025-11-01 13:21:08.473628 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-01 13:21:08.473641 | orchestrator | Saturday 01 November 2025 13:20:27 +0000 (0:00:03.735) 0:00:25.618 *****
2025-11-01 13:21:08.473654 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.473666 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:21:08.473679 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:21:08.473691 | orchestrator |
2025-11-01 13:21:08.473703 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-11-01 13:21:08.473715 | orchestrator | Saturday 01 November 2025 13:20:27 +0000 (0:00:00.328) 0:00:25.946 *****
2025-11-01 13:21:08.473732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.473767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.473791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.473804 | orchestrator |
2025-11-01 13:21:08.473817 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-11-01 13:21:08.473830 | orchestrator | Saturday 01 November 2025 13:20:28 +0000 (0:00:00.926) 0:00:26.872 *****
2025-11-01 13:21:08.473841 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.473854 | orchestrator |
2025-11-01 13:21:08.473867 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-11-01 13:21:08.473879 | orchestrator | Saturday 01 November 2025 13:20:28 +0000 (0:00:00.129) 0:00:27.001 *****
2025-11-01 13:21:08.473891 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.473903 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:21:08.473916 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:21:08.473930 | orchestrator |
2025-11-01 13:21:08.473942 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-11-01 13:21:08.473955 | orchestrator | Saturday 01 November 2025 13:20:29 +0000 (0:00:00.542) 0:00:27.544 *****
2025-11-01 13:21:08.473968 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:21:08.473981 | orchestrator |
2025-11-01 13:21:08.473997 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-11-01 13:21:08.474008 | orchestrator | Saturday 01 November 2025 13:20:30 +0000 (0:00:00.611) 0:00:28.155 *****
2025-11-01 13:21:08.474075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474132 | orchestrator |
2025-11-01 13:21:08.474143 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-11-01 13:21:08.474154 | orchestrator | Saturday 01 November 2025 13:20:31 +0000 (0:00:01.438) 0:00:29.594 *****
2025-11-01 13:21:08.474165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474177 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.474194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474206 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:21:08.474223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474242 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:21:08.474253 | orchestrator |
2025-11-01 13:21:08.474264 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-11-01 13:21:08.474275 | orchestrator | Saturday 01 November 2025 13:20:32 +0000 (0:00:01.079) 0:00:30.674 *****
2025-11-01 13:21:08.474286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474298 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.474309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474352 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:21:08.474371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474383 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:21:08.474394 | orchestrator |
2025-11-01 13:21:08.474404 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-11-01 13:21:08.474422 | orchestrator | Saturday 01 November 2025 13:20:33 +0000 (0:00:00.857) 0:00:31.531 *****
2025-11-01 13:21:08.474438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474474 | orchestrator |
2025-11-01 13:21:08.474485 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-11-01 13:21:08.474496 | orchestrator | Saturday 01 November 2025 13:20:34 +0000 (0:00:01.307) 0:00:32.839 *****
2025-11-01 13:21:08.474512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.474532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.475915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.475942 | orchestrator |
2025-11-01 13:21:08.475954 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-11-01 13:21:08.475966 | orchestrator | Saturday 01 November 2025 13:20:37 +0000 (0:00:02.802) 0:00:35.642 *****
2025-11-01 13:21:08.475977 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-11-01 13:21:08.475989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-11-01 13:21:08.476000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-11-01 13:21:08.476011 | orchestrator |
2025-11-01 13:21:08.476022 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-11-01 13:21:08.476033 | orchestrator | Saturday 01 November 2025 13:20:39 +0000 (0:00:01.549) 0:00:37.192 *****
2025-11-01 13:21:08.476044 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:21:08.476056 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:21:08.476067 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:21:08.476078 | orchestrator |
2025-11-01 13:21:08.476089 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-11-01 13:21:08.476100 | orchestrator | Saturday 01 November 2025 13:20:40 +0000 (0:00:01.346) 0:00:38.538 *****
2025-11-01 13:21:08.476111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476135 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:21:08.476156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476167 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:21:08.476187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476199 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:21:08.476210 | orchestrator |
2025-11-01 13:21:08.476221 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-11-01 13:21:08.476232 | orchestrator | Saturday 01 November 2025 13:20:41 +0000 (0:00:00.566) 0:00:39.105 *****
2025-11-01 13:21:08.476243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-11-01 13:21:08.476298 | orchestrator |
2025-11-01 13:21:08.476309 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-11-01 13:21:08.476340 | orchestrator | Saturday 01 November 2025 13:20:42 +0000 (0:00:01.233) 0:00:40.339 *****
2025-11-01 13:21:08.476352 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:21:08.476363 | orchestrator |
2025-11-01 13:21:08.476374 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-11-01 13:21:08.476385 | orchestrator | Saturday 01 November 2025 13:20:45 +0000 (0:00:02.884) 0:00:43.223 *****
2025-11-01 13:21:08.476396 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:21:08.476406 | orchestrator |
2025-11-01 13:21:08.476417 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-11-01 13:21:08.476428 | orchestrator | Saturday 01 November 2025 13:20:47 +0000 (0:00:02.439) 0:00:45.663 *****
2025-11-01 13:21:08.476439 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:21:08.476450 | orchestrator |
2025-11-01 13:21:08.476461 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-11-01 13:21:08.476472 | orchestrator | Saturday 01 November 2025 13:21:00 +0000 (0:00:12.425) 0:00:58.089 *****
2025-11-01 13:21:08.476483 | orchestrator |
2025-11-01 13:21:08.476494 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-11-01 13:21:08.476505 | orchestrator | Saturday 01 November 2025 13:21:00 +0000 (0:00:00.074) 0:00:58.163 *****
2025-11-01 13:21:08.476516 | orchestrator |
2025-11-01 13:21:08.476534 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-11-01 13:21:08.476545 | orchestrator | Saturday 01 November 2025 13:21:00 +0000 (0:00:00.070) 0:00:58.234 *****
2025-11-01 13:21:08.476556 | orchestrator |
2025-11-01 13:21:08.476567 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-11-01 13:21:08.476578 | orchestrator | Saturday 01 November 2025 13:21:00 +0000 (0:00:00.089) 0:00:58.324 *****
2025-11-01 13:21:08.476589 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:21:08.476600 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:21:08.476611 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:21:08.476622 | orchestrator |
2025-11-01 13:21:08.476633 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:21:08.476661 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-11-01 13:21:08.476675 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 13:21:08.476686 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 13:21:08.476697 | orchestrator |
2025-11-01 13:21:08.476708 | orchestrator |
2025-11-01 13:21:08.476719 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:21:08.476738 | orchestrator | Saturday 01 November 2025 13:21:05 +0000 (0:00:05.605) 0:01:03.930 *****
2025-11-01 13:21:08.476749 | orchestrator | ===============================================================================
2025-11-01 13:21:08.476760 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.43s
2025-11-01 13:21:08.476771 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.29s
2025-11-01 13:21:08.476782 | orchestrator | placement : Restart placement-api container ----------------------------- 5.61s
2025-11-01 13:21:08.476793 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s
2025-11-01 13:21:08.476804 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.74s
2025-11-01 13:21:08.476814 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.57s
2025-11-01 13:21:08.476825 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.33s
2025-11-01 13:21:08.476836 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.23s
2025-11-01 13:21:08.476847 | orchestrator | placement : Creating placement databases -------------------------------- 2.88s
2025-11-01 13:21:08.476857 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.80s
2025-11-01 13:21:08.476868 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.44s
2025-11-01 13:21:08.476879 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.55s
2025-11-01 13:21:08.476890 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.44s
2025-11-01 13:21:08.476901 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s
2025-11-01 13:21:08.476912 | orchestrator | placement : Copying over config.json files for services ----------------- 1.31s
2025-11-01 13:21:08.476922 | orchestrator | placement : Check placement containers ---------------------------------- 1.23s
2025-11-01 13:21:08.476939 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.08s
2025-11-01 13:21:08.476950 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.93s
2025-11-01 13:21:08.476961 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.86s
2025-11-01 13:21:08.476972 | orchestrator | placement : include_tasks ----------------------------------------------- 0.61s
2025-11-01 13:21:08.476983 | orchestrator | 2025-11-01 13:21:08 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED
2025-11-01 13:21:08.476995 | orchestrator | 2025-11-01 13:21:08 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED
2025-11-01 13:21:08.477006 | orchestrator | 2025-11-01 13:21:08 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:21:11.557289 | orchestrator | 2025-11-01 13:21:11 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:21:11.559989 | orchestrator | 2025-11-01 13:21:11 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED
2025-11-01 13:21:11.561199 | orchestrator | 2025-11-01 13:21:11 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED
2025-11-01 13:21:11.562463 | orchestrator | 2025-11-01 13:21:11 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED
2025-11-01 13:21:11.562566 | orchestrator | 2025-11-01 13:21:11 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:21:14.605490 | orchestrator | 2025-11-01 13:21:14 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01
13:21:14.607287 | orchestrator | 2025-11-01 13:21:14 | INFO  | Task a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state STARTED 2025-11-01 13:21:14.608570 | orchestrator | 2025-11-01 13:21:14 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:21:14.610272 | orchestrator | 2025-11-01 13:21:14 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:21:14.610636 | orchestrator | 2025-11-01 13:21:14 | INFO  | Wait 1 second(s) until the next check [identical status polls for tasks d2849631…, a39ab462…, 31f51de1…, 2bc6b83c… repeated every ~3 seconds from 13:21:17 through 13:22:09 elided] 2025-11-01 13:22:09.426495 | orchestrator | 2025-11-01 13:22:09 | INFO  | Task
a39ab462-6f7e-4b9c-9d4c-0525e598d270 is in state SUCCESS 2025-11-01 13:22:09.427801 | orchestrator | 2025-11-01 13:22:09.428127 | orchestrator | 2025-11-01 13:22:09.428144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:22:09.428157 | orchestrator | 2025-11-01 13:22:09.428168 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:22:09.428180 | orchestrator | Saturday 01 November 2025 13:20:04 +0000 (0:00:00.261) 0:00:00.261 ***** 2025-11-01 13:22:09.428191 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:09.428204 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:22:09.428215 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:22:09.428226 | orchestrator | 2025-11-01 13:22:09.428237 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:22:09.428261 | orchestrator | Saturday 01 November 2025 13:20:04 +0000 (0:00:00.333) 0:00:00.595 ***** 2025-11-01 13:22:09.428272 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-11-01 13:22:09.428284 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-11-01 13:22:09.428295 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-11-01 13:22:09.428306 | orchestrator | 2025-11-01 13:22:09.428317 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-11-01 13:22:09.428351 | orchestrator | 2025-11-01 13:22:09.428362 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 13:22:09.428373 | orchestrator | Saturday 01 November 2025 13:20:05 +0000 (0:00:00.393) 0:00:00.989 ***** 2025-11-01 13:22:09.428384 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:09.428415 | orchestrator | 2025-11-01 13:22:09.428426 | 
orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-11-01 13:22:09.428437 | orchestrator | Saturday 01 November 2025 13:20:05 +0000 (0:00:00.518) 0:00:01.507 ***** 2025-11-01 13:22:09.428460 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-11-01 13:22:09.428472 | orchestrator | 2025-11-01 13:22:09.428482 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-11-01 13:22:09.428493 | orchestrator | Saturday 01 November 2025 13:20:08 +0000 (0:00:03.076) 0:00:04.584 ***** 2025-11-01 13:22:09.428504 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-11-01 13:22:09.428516 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-11-01 13:22:09.428527 | orchestrator | 2025-11-01 13:22:09.428537 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-11-01 13:22:09.428548 | orchestrator | Saturday 01 November 2025 13:20:14 +0000 (0:00:05.996) 0:00:10.581 ***** 2025-11-01 13:22:09.428559 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:22:09.428570 | orchestrator | 2025-11-01 13:22:09.428650 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-11-01 13:22:09.428662 | orchestrator | Saturday 01 November 2025 13:20:17 +0000 (0:00:03.239) 0:00:13.820 ***** 2025-11-01 13:22:09.428673 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:22:09.428684 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-11-01 13:22:09.428695 | orchestrator | 2025-11-01 13:22:09.428706 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-11-01 13:22:09.428717 | orchestrator | Saturday 01 November 2025 13:20:21 +0000 
(0:00:03.882) 0:00:17.702 ***** 2025-11-01 13:22:09.428728 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:22:09.428761 | orchestrator | 2025-11-01 13:22:09.428772 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-11-01 13:22:09.428783 | orchestrator | Saturday 01 November 2025 13:20:25 +0000 (0:00:03.309) 0:00:21.012 ***** 2025-11-01 13:22:09.428794 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-11-01 13:22:09.428804 | orchestrator | 2025-11-01 13:22:09.428815 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-11-01 13:22:09.428825 | orchestrator | Saturday 01 November 2025 13:20:28 +0000 (0:00:03.907) 0:00:24.919 ***** 2025-11-01 13:22:09.428836 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.428847 | orchestrator | 2025-11-01 13:22:09.428858 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-11-01 13:22:09.428869 | orchestrator | Saturday 01 November 2025 13:20:32 +0000 (0:00:03.290) 0:00:28.210 ***** 2025-11-01 13:22:09.428879 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.428890 | orchestrator | 2025-11-01 13:22:09.428901 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-11-01 13:22:09.428911 | orchestrator | Saturday 01 November 2025 13:20:36 +0000 (0:00:04.256) 0:00:32.466 ***** 2025-11-01 13:22:09.428922 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.428933 | orchestrator | 2025-11-01 13:22:09.428944 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-11-01 13:22:09.428955 | orchestrator | Saturday 01 November 2025 13:20:39 +0000 (0:00:03.353) 0:00:35.820 ***** 2025-11-01 13:22:09.428999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429103 | orchestrator | 2025-11-01 13:22:09.429115 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-11-01 13:22:09.429126 | orchestrator | Saturday 01 November 2025 13:20:41 +0000 (0:00:01.427) 0:00:37.247 ***** 2025-11-01 13:22:09.429137 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.429148 | orchestrator | 2025-11-01 13:22:09.429159 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-11-01 13:22:09.429169 | orchestrator | Saturday 01 November 2025 13:20:41 +0000 (0:00:00.226) 0:00:37.473 ***** 2025-11-01 13:22:09.429185 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.429196 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:09.429207 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:09.429217 | orchestrator | 2025-11-01 13:22:09.429228 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] 
*************************** 2025-11-01 13:22:09.429239 | orchestrator | Saturday 01 November 2025 13:20:42 +0000 (0:00:00.673) 0:00:38.147 ***** 2025-11-01 13:22:09.429250 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:22:09.429261 | orchestrator | 2025-11-01 13:22:09.429272 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-11-01 13:22:09.429285 | orchestrator | Saturday 01 November 2025 13:20:43 +0000 (0:00:01.164) 0:00:39.311 ***** 2025-11-01 13:22:09.429299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429437 | orchestrator | 2025-11-01 13:22:09.429450 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-11-01 13:22:09.429463 | orchestrator | Saturday 01 November 2025 13:20:46 +0000 (0:00:02.771) 0:00:42.082 ***** 2025-11-01 13:22:09.429476 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:09.429488 | 
orchestrator | ok: [testbed-node-1] 2025-11-01 13:22:09.429500 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:22:09.429512 | orchestrator | 2025-11-01 13:22:09.429524 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 13:22:09.429537 | orchestrator | Saturday 01 November 2025 13:20:46 +0000 (0:00:00.368) 0:00:42.451 ***** 2025-11-01 13:22:09.429550 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:09.429562 | orchestrator | 2025-11-01 13:22:09.429574 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-11-01 13:22:09.429586 | orchestrator | Saturday 01 November 2025 13:20:47 +0000 (0:00:00.898) 0:00:43.349 ***** 2025-11-01 13:22:09.429630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.429685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.429720 | 
orchestrator | 2025-11-01 13:22:09.429731 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-11-01 13:22:09.429742 | orchestrator | Saturday 01 November 2025 13:20:50 +0000 (0:00:02.664) 0:00:46.014 ***** 2025-11-01 13:22:09.429761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.429778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.429809 | 
orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.429821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.429833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.429845 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.429856 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:09.429874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.429895 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:09.429906 | orchestrator | 2025-11-01 13:22:09.429917 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-11-01 13:22:09.429928 | orchestrator | Saturday 01 November 2025 13:20:50 +0000 (0:00:00.835) 0:00:46.849 ***** 2025-11-01 13:22:09.429943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.429955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.429966 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.429978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.429990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.430001 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:09.430099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.430234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.430250 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:09.430261 | orchestrator | 2025-11-01 13:22:09.430272 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-11-01 13:22:09.430283 | orchestrator | Saturday 01 November 2025 13:20:52 +0000 (0:00:01.435) 0:00:48.284 ***** 2025-11-01 13:22:09.430294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430503 | orchestrator | 2025-11-01 13:22:09.430513 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-11-01 13:22:09.430523 | orchestrator | Saturday 01 November 2025 13:20:55 +0000 (0:00:02.684) 0:00:50.968 ***** 2025-11-01 13:22:09.430533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430611 | orchestrator | 2025-11-01 13:22:09.430626 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-11-01 13:22:09.430636 | orchestrator | Saturday 01 November 2025 13:21:00 +0000 (0:00:05.247) 0:00:56.216 ***** 2025-11-01 13:22:09.430652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.430666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.430677 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.430687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.430698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.430708 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:09.430718 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-11-01 13:22:09.430742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:09.430753 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:09.430762 | orchestrator | 2025-11-01 13:22:09.430772 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-11-01 13:22:09.430782 | orchestrator | Saturday 01 November 2025 13:21:01 +0000 (0:00:00.885) 0:00:57.101 ***** 2025-11-01 
13:22:09.430797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-11-01 13:22:09.430835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:09.430877 | orchestrator | 2025-11-01 13:22:09.430887 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-11-01 13:22:09.430897 | orchestrator | Saturday 01 November 2025 13:21:03 +0000 (0:00:02.666) 0:00:59.768 ***** 2025-11-01 13:22:09.430907 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:09.430916 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:09.430926 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:09.430935 | orchestrator | 2025-11-01 13:22:09.430945 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-11-01 13:22:09.430955 | orchestrator | Saturday 01 November 2025 13:21:04 +0000 (0:00:00.338) 0:01:00.107 ***** 2025-11-01 13:22:09.430964 | orchestrator | changed: [testbed-node-0] 
2025-11-01 13:22:09.430974 | orchestrator | 2025-11-01 13:22:09.430983 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-11-01 13:22:09.430993 | orchestrator | Saturday 01 November 2025 13:21:06 +0000 (0:00:02.119) 0:01:02.227 ***** 2025-11-01 13:22:09.431003 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.431012 | orchestrator | 2025-11-01 13:22:09.431022 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-11-01 13:22:09.431032 | orchestrator | Saturday 01 November 2025 13:21:08 +0000 (0:00:02.702) 0:01:04.930 ***** 2025-11-01 13:22:09.431041 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.431052 | orchestrator | 2025-11-01 13:22:09.431063 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 13:22:09.431084 | orchestrator | Saturday 01 November 2025 13:21:28 +0000 (0:00:19.338) 0:01:24.268 ***** 2025-11-01 13:22:09.431095 | orchestrator | 2025-11-01 13:22:09.431106 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 13:22:09.431117 | orchestrator | Saturday 01 November 2025 13:21:28 +0000 (0:00:00.080) 0:01:24.349 ***** 2025-11-01 13:22:09.431129 | orchestrator | 2025-11-01 13:22:09.431139 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-11-01 13:22:09.431148 | orchestrator | Saturday 01 November 2025 13:21:28 +0000 (0:00:00.092) 0:01:24.441 ***** 2025-11-01 13:22:09.431158 | orchestrator | 2025-11-01 13:22:09.431167 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-11-01 13:22:09.431177 | orchestrator | Saturday 01 November 2025 13:21:28 +0000 (0:00:00.072) 0:01:24.514 ***** 2025-11-01 13:22:09.431187 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.431196 | orchestrator | changed: [testbed-node-2] 
2025-11-01 13:22:09.431206 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:09.431215 | orchestrator | 2025-11-01 13:22:09.431225 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-11-01 13:22:09.431235 | orchestrator | Saturday 01 November 2025 13:21:48 +0000 (0:00:20.323) 0:01:44.837 ***** 2025-11-01 13:22:09.431244 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:09.431254 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:09.431264 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:09.431273 | orchestrator | 2025-11-01 13:22:09.431283 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:22:09.431293 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-11-01 13:22:09.431304 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 13:22:09.431313 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-11-01 13:22:09.431323 | orchestrator | 2025-11-01 13:22:09.431349 | orchestrator | 2025-11-01 13:22:09.431359 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:22:09.431369 | orchestrator | Saturday 01 November 2025 13:22:08 +0000 (0:00:20.053) 0:02:04.891 ***** 2025-11-01 13:22:09.431378 | orchestrator | =============================================================================== 2025-11-01 13:22:09.431388 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.32s 2025-11-01 13:22:09.431403 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 20.05s 2025-11-01 13:22:09.431413 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.34s 2025-11-01 13:22:09.431423 | 
orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.00s 2025-11-01 13:22:09.431433 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.25s 2025-11-01 13:22:09.431442 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.26s 2025-11-01 13:22:09.431452 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.91s 2025-11-01 13:22:09.431466 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.88s 2025-11-01 13:22:09.431476 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.35s 2025-11-01 13:22:09.431486 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.31s 2025-11-01 13:22:09.431495 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.29s 2025-11-01 13:22:09.431505 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.24s 2025-11-01 13:22:09.431515 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.08s 2025-11-01 13:22:09.431530 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.77s 2025-11-01 13:22:09.431540 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.70s 2025-11-01 13:22:09.431549 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2025-11-01 13:22:09.431559 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.67s 2025-11-01 13:22:09.431568 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.66s 2025-11-01 13:22:09.431578 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.12s 2025-11-01 13:22:09.431587 | orchestrator | 
service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.44s 2025-11-01 13:22:09.431597 | orchestrator | 2025-11-01 13:22:09 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:09.431607 | orchestrator | 2025-11-01 13:22:09 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:09.431617 | orchestrator | 2025-11-01 13:22:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:12.473400 | orchestrator | 2025-11-01 13:22:12 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:12.475518 | orchestrator | 2025-11-01 13:22:12 | INFO  | Task 5caaa995-1f4d-4156-a976-0ec5a61aeaec is in state STARTED 2025-11-01 13:22:12.476399 | orchestrator | 2025-11-01 13:22:12 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:12.479683 | orchestrator | 2025-11-01 13:22:12 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:12.479706 | orchestrator | 2025-11-01 13:22:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:15.515923 | orchestrator | 2025-11-01 13:22:15 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:15.516750 | orchestrator | 2025-11-01 13:22:15 | INFO  | Task 5caaa995-1f4d-4156-a976-0ec5a61aeaec is in state STARTED 2025-11-01 13:22:15.518446 | orchestrator | 2025-11-01 13:22:15 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:15.520101 | orchestrator | 2025-11-01 13:22:15 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:15.520407 | orchestrator | 2025-11-01 13:22:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:18.559410 | orchestrator | 2025-11-01 13:22:18 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:18.560419 | orchestrator | 2025-11-01 13:22:18 | INFO  | Task 
5caaa995-1f4d-4156-a976-0ec5a61aeaec is in state SUCCESS 2025-11-01 13:22:18.561853 | orchestrator | 2025-11-01 13:22:18 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:18.563972 | orchestrator | 2025-11-01 13:22:18 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:18.564702 | orchestrator | 2025-11-01 13:22:18 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:18.564902 | orchestrator | 2025-11-01 13:22:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:21.604155 | orchestrator | 2025-11-01 13:22:21 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:21.605813 | orchestrator | 2025-11-01 13:22:21 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:21.607768 | orchestrator | 2025-11-01 13:22:21 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:21.609923 | orchestrator | 2025-11-01 13:22:21 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:21.609967 | orchestrator | 2025-11-01 13:22:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:24.658263 | orchestrator | 2025-11-01 13:22:24 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:24.658705 | orchestrator | 2025-11-01 13:22:24 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:24.662051 | orchestrator | 2025-11-01 13:22:24 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:24.663292 | orchestrator | 2025-11-01 13:22:24 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:24.663313 | orchestrator | 2025-11-01 13:22:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:27.700116 | orchestrator | 2025-11-01 13:22:27 | INFO  | Task 
d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:27.700946 | orchestrator | 2025-11-01 13:22:27 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:27.702474 | orchestrator | 2025-11-01 13:22:27 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:27.703931 | orchestrator | 2025-11-01 13:22:27 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:27.704003 | orchestrator | 2025-11-01 13:22:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:30.738148 | orchestrator | 2025-11-01 13:22:30 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:30.739410 | orchestrator | 2025-11-01 13:22:30 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:30.740856 | orchestrator | 2025-11-01 13:22:30 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:30.743386 | orchestrator | 2025-11-01 13:22:30 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:30.743674 | orchestrator | 2025-11-01 13:22:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:33.785695 | orchestrator | 2025-11-01 13:22:33 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:33.786448 | orchestrator | 2025-11-01 13:22:33 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:33.789606 | orchestrator | 2025-11-01 13:22:33 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:33.792420 | orchestrator | 2025-11-01 13:22:33 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:33.793248 | orchestrator | 2025-11-01 13:22:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:36.835831 | orchestrator | 2025-11-01 13:22:36 | INFO  | Task 
d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:36.837483 | orchestrator | 2025-11-01 13:22:36 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:36.839016 | orchestrator | 2025-11-01 13:22:36 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:36.840568 | orchestrator | 2025-11-01 13:22:36 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state STARTED 2025-11-01 13:22:36.840780 | orchestrator | 2025-11-01 13:22:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:22:39.878076 | orchestrator | 2025-11-01 13:22:39 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:22:39.880085 | orchestrator | 2025-11-01 13:22:39 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:22:39.883976 | orchestrator | 2025-11-01 13:22:39 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED 2025-11-01 13:22:39.888904 | orchestrator | 2025-11-01 13:22:39 | INFO  | Task 2bc6b83c-2478-4fda-9fac-d1c83834a374 is in state SUCCESS 2025-11-01 13:22:39.891388 | orchestrator | 2025-11-01 13:22:39.891436 | orchestrator | 2025-11-01 13:22:39.891449 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:22:39.891461 | orchestrator | 2025-11-01 13:22:39.891585 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:22:39.891601 | orchestrator | Saturday 01 November 2025 13:22:14 +0000 (0:00:00.225) 0:00:00.225 ***** 2025-11-01 13:22:39.891612 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.891659 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:22:39.891672 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:22:39.891683 | orchestrator | 2025-11-01 13:22:39.892268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 
13:22:39.892287 | orchestrator | Saturday 01 November 2025 13:22:14 +0000 (0:00:00.345) 0:00:00.570 ***** 2025-11-01 13:22:39.892298 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-11-01 13:22:39.892457 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-11-01 13:22:39.892799 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-11-01 13:22:39.892813 | orchestrator | 2025-11-01 13:22:39.892825 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-11-01 13:22:39.892837 | orchestrator | 2025-11-01 13:22:39.892848 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-11-01 13:22:39.892859 | orchestrator | Saturday 01 November 2025 13:22:15 +0000 (0:00:00.920) 0:00:01.491 ***** 2025-11-01 13:22:39.892870 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:22:39.892881 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.892892 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:22:39.892903 | orchestrator | 2025-11-01 13:22:39.892929 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:22:39.892941 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:22:39.892982 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:22:39.893068 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:22:39.893083 | orchestrator | 2025-11-01 13:22:39.893094 | orchestrator | 2025-11-01 13:22:39.893105 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:22:39.893117 | orchestrator | Saturday 01 November 2025 13:22:16 +0000 (0:00:00.751) 0:00:02.242 ***** 2025-11-01 13:22:39.893127 | orchestrator | 
=============================================================================== 2025-11-01 13:22:39.893138 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.92s 2025-11-01 13:22:39.893149 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.75s 2025-11-01 13:22:39.893160 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-11-01 13:22:39.893171 | orchestrator | 2025-11-01 13:22:39.893182 | orchestrator | 2025-11-01 13:22:39.893193 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:22:39.893204 | orchestrator | 2025-11-01 13:22:39.893215 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-11-01 13:22:39.893226 | orchestrator | Saturday 01 November 2025 13:11:41 +0000 (0:00:00.478) 0:00:00.478 ***** 2025-11-01 13:22:39.893237 | orchestrator | changed: [testbed-manager] 2025-11-01 13:22:39.893249 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.893260 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.893271 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.893282 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.893293 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.893822 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.893836 | orchestrator | 2025-11-01 13:22:39.893847 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:22:39.893858 | orchestrator | Saturday 01 November 2025 13:11:43 +0000 (0:00:01.282) 0:00:01.760 ***** 2025-11-01 13:22:39.893868 | orchestrator | changed: [testbed-manager] 2025-11-01 13:22:39.893879 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.893890 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.893901 | orchestrator | changed: 
[testbed-node-2] 2025-11-01 13:22:39.893912 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.893923 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.893934 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.893944 | orchestrator | 2025-11-01 13:22:39.893955 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-11-01 13:22:39.893966 | orchestrator | Saturday 01 November 2025 13:11:44 +0000 (0:00:00.843) 0:00:02.603 ***** 2025-11-01 13:22:39.893977 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-11-01 13:22:39.893988 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-11-01 13:22:39.893999 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-11-01 13:22:39.894010 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-11-01 13:22:39.894058 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-11-01 13:22:39.894069 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-11-01 13:22:39.894080 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-11-01 13:22:39.894091 | orchestrator | 2025-11-01 13:22:39.894102 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-11-01 13:22:39.894112 | orchestrator | 2025-11-01 13:22:39.894123 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-11-01 13:22:39.894134 | orchestrator | Saturday 01 November 2025 13:11:45 +0000 (0:00:01.111) 0:00:03.715 ***** 2025-11-01 13:22:39.894145 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.894156 | orchestrator | 2025-11-01 13:22:39.894166 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-11-01 13:22:39.894177 | orchestrator | Saturday 01 November 2025 
13:11:46 +0000 (0:00:00.933) 0:00:04.648 ***** 2025-11-01 13:22:39.894188 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-11-01 13:22:39.894287 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-11-01 13:22:39.894303 | orchestrator | 2025-11-01 13:22:39.894314 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-11-01 13:22:39.894325 | orchestrator | Saturday 01 November 2025 13:11:50 +0000 (0:00:04.678) 0:00:09.327 ***** 2025-11-01 13:22:39.894387 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:22:39.894399 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-11-01 13:22:39.894409 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.894420 | orchestrator | 2025-11-01 13:22:39.894431 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-01 13:22:39.894442 | orchestrator | Saturday 01 November 2025 13:11:55 +0000 (0:00:04.912) 0:00:14.239 ***** 2025-11-01 13:22:39.894453 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.894464 | orchestrator | 2025-11-01 13:22:39.894475 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-11-01 13:22:39.894486 | orchestrator | Saturday 01 November 2025 13:11:56 +0000 (0:00:00.957) 0:00:15.197 ***** 2025-11-01 13:22:39.894497 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.894507 | orchestrator | 2025-11-01 13:22:39.894518 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-11-01 13:22:39.894529 | orchestrator | Saturday 01 November 2025 13:11:59 +0000 (0:00:02.647) 0:00:17.844 ***** 2025-11-01 13:22:39.894540 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.894550 | orchestrator | 2025-11-01 13:22:39.894561 | orchestrator | TASK [nova : include_tasks] **************************************************** 
2025-11-01 13:22:39.894592 | orchestrator | Saturday 01 November 2025 13:12:02 +0000 (0:00:03.532) 0:00:21.378 ***** 2025-11-01 13:22:39.894603 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.894614 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.894625 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.894636 | orchestrator | 2025-11-01 13:22:39.894647 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-01 13:22:39.894658 | orchestrator | Saturday 01 November 2025 13:12:03 +0000 (0:00:00.711) 0:00:22.090 ***** 2025-11-01 13:22:39.894669 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.894680 | orchestrator | 2025-11-01 13:22:39.894690 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-11-01 13:22:39.894701 | orchestrator | Saturday 01 November 2025 13:12:39 +0000 (0:00:35.448) 0:00:57.538 ***** 2025-11-01 13:22:39.894712 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.894723 | orchestrator | 2025-11-01 13:22:39.894734 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-01 13:22:39.894745 | orchestrator | Saturday 01 November 2025 13:12:57 +0000 (0:00:18.477) 0:01:16.016 ***** 2025-11-01 13:22:39.894755 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.894766 | orchestrator | 2025-11-01 13:22:39.894777 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-01 13:22:39.894788 | orchestrator | Saturday 01 November 2025 13:13:13 +0000 (0:00:16.434) 0:01:32.451 ***** 2025-11-01 13:22:39.894799 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.894810 | orchestrator | 2025-11-01 13:22:39.894821 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-11-01 13:22:39.894831 | orchestrator | Saturday 01 November 2025 13:13:15 +0000 
(0:00:01.477) 0:01:33.929 ***** 2025-11-01 13:22:39.894842 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.894853 | orchestrator | 2025-11-01 13:22:39.894864 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-01 13:22:39.894874 | orchestrator | Saturday 01 November 2025 13:13:16 +0000 (0:00:00.623) 0:01:34.552 ***** 2025-11-01 13:22:39.894886 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.894897 | orchestrator | 2025-11-01 13:22:39.894910 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-11-01 13:22:39.894922 | orchestrator | Saturday 01 November 2025 13:13:18 +0000 (0:00:02.209) 0:01:36.762 ***** 2025-11-01 13:22:39.894934 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.894946 | orchestrator | 2025-11-01 13:22:39.894957 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-01 13:22:39.894969 | orchestrator | Saturday 01 November 2025 13:13:41 +0000 (0:00:22.990) 0:01:59.753 ***** 2025-11-01 13:22:39.894982 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.894994 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895006 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895017 | orchestrator | 2025-11-01 13:22:39.895029 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-11-01 13:22:39.895040 | orchestrator | 2025-11-01 13:22:39.895051 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-11-01 13:22:39.895062 | orchestrator | Saturday 01 November 2025 13:13:41 +0000 (0:00:00.404) 0:02:00.157 ***** 2025-11-01 13:22:39.895073 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.895083 | 
orchestrator | 2025-11-01 13:22:39.895094 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-11-01 13:22:39.895105 | orchestrator | Saturday 01 November 2025 13:13:42 +0000 (0:00:00.594) 0:02:00.752 ***** 2025-11-01 13:22:39.895115 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895126 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895137 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.895148 | orchestrator | 2025-11-01 13:22:39.895166 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-11-01 13:22:39.895177 | orchestrator | Saturday 01 November 2025 13:13:44 +0000 (0:00:02.580) 0:02:03.333 ***** 2025-11-01 13:22:39.895188 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895198 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895209 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.895220 | orchestrator | 2025-11-01 13:22:39.895231 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-11-01 13:22:39.895242 | orchestrator | Saturday 01 November 2025 13:13:47 +0000 (0:00:02.599) 0:02:05.933 ***** 2025-11-01 13:22:39.895252 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.895263 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895363 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895380 | orchestrator | 2025-11-01 13:22:39.895391 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-01 13:22:39.895402 | orchestrator | Saturday 01 November 2025 13:13:47 +0000 (0:00:00.370) 0:02:06.303 ***** 2025-11-01 13:22:39.895412 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 13:22:39.895423 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895434 | orchestrator | skipping: [testbed-node-2] => (item=None)  
2025-11-01 13:22:39.895444 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895455 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-11-01 13:22:39.895466 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-11-01 13:22:39.895477 | orchestrator | 2025-11-01 13:22:39.895488 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-11-01 13:22:39.895498 | orchestrator | Saturday 01 November 2025 13:13:58 +0000 (0:00:11.190) 0:02:17.494 ***** 2025-11-01 13:22:39.895509 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.895520 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895531 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895541 | orchestrator | 2025-11-01 13:22:39.895552 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-11-01 13:22:39.895563 | orchestrator | Saturday 01 November 2025 13:14:00 +0000 (0:00:01.111) 0:02:18.605 ***** 2025-11-01 13:22:39.895573 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-11-01 13:22:39.895584 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-11-01 13:22:39.895594 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.895611 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895622 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-11-01 13:22:39.895633 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895644 | orchestrator | 2025-11-01 13:22:39.895654 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-01 13:22:39.895665 | orchestrator | Saturday 01 November 2025 13:14:01 +0000 (0:00:01.733) 0:02:20.339 ***** 2025-11-01 13:22:39.895676 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895687 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895697 | orchestrator | changed: 
[testbed-node-0] 2025-11-01 13:22:39.895708 | orchestrator | 2025-11-01 13:22:39.895718 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-11-01 13:22:39.895729 | orchestrator | Saturday 01 November 2025 13:14:03 +0000 (0:00:01.644) 0:02:21.983 ***** 2025-11-01 13:22:39.895740 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895750 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895761 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.895772 | orchestrator | 2025-11-01 13:22:39.895782 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-11-01 13:22:39.895793 | orchestrator | Saturday 01 November 2025 13:14:04 +0000 (0:00:01.409) 0:02:23.392 ***** 2025-11-01 13:22:39.895804 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895814 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895825 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.895865 | orchestrator | 2025-11-01 13:22:39.895877 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-11-01 13:22:39.895888 | orchestrator | Saturday 01 November 2025 13:14:08 +0000 (0:00:03.951) 0:02:27.344 ***** 2025-11-01 13:22:39.895899 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895909 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895920 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.895931 | orchestrator | 2025-11-01 13:22:39.895942 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-01 13:22:39.895953 | orchestrator | Saturday 01 November 2025 13:14:33 +0000 (0:00:24.237) 0:02:51.582 ***** 2025-11-01 13:22:39.895963 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.895975 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.895987 | orchestrator | ok: [testbed-node-0] 
2025-11-01 13:22:39.895999 | orchestrator | 2025-11-01 13:22:39.896011 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-01 13:22:39.896023 | orchestrator | Saturday 01 November 2025 13:14:49 +0000 (0:00:16.388) 0:03:07.970 ***** 2025-11-01 13:22:39.896035 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.896047 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.896059 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.896070 | orchestrator | 2025-11-01 13:22:39.896082 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-11-01 13:22:39.896094 | orchestrator | Saturday 01 November 2025 13:14:50 +0000 (0:00:01.512) 0:03:09.483 ***** 2025-11-01 13:22:39.896106 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.896118 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.896130 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.896142 | orchestrator | 2025-11-01 13:22:39.896155 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-11-01 13:22:39.896168 | orchestrator | Saturday 01 November 2025 13:15:06 +0000 (0:00:15.140) 0:03:24.624 ***** 2025-11-01 13:22:39.896180 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.896193 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.896204 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.896215 | orchestrator | 2025-11-01 13:22:39.896226 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-11-01 13:22:39.896237 | orchestrator | Saturday 01 November 2025 13:15:07 +0000 (0:00:01.179) 0:03:25.804 ***** 2025-11-01 13:22:39.896248 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.896259 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.896269 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:22:39.896280 | orchestrator | 2025-11-01 13:22:39.896291 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-11-01 13:22:39.896301 | orchestrator | 2025-11-01 13:22:39.896312 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-01 13:22:39.896323 | orchestrator | Saturday 01 November 2025 13:15:07 +0000 (0:00:00.586) 0:03:26.391 ***** 2025-11-01 13:22:39.896349 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.896361 | orchestrator | 2025-11-01 13:22:39.896449 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-11-01 13:22:39.896464 | orchestrator | Saturday 01 November 2025 13:15:08 +0000 (0:00:00.638) 0:03:27.029 ***** 2025-11-01 13:22:39.896475 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-11-01 13:22:39.896486 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-11-01 13:22:39.896497 | orchestrator | 2025-11-01 13:22:39.896507 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-11-01 13:22:39.896518 | orchestrator | Saturday 01 November 2025 13:15:12 +0000 (0:00:04.087) 0:03:31.117 ***** 2025-11-01 13:22:39.896529 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-11-01 13:22:39.896540 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-11-01 13:22:39.896559 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-11-01 13:22:39.896570 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-11-01 13:22:39.896581 | 
orchestrator | 2025-11-01 13:22:39.896592 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-11-01 13:22:39.896602 | orchestrator | Saturday 01 November 2025 13:15:20 +0000 (0:00:07.828) 0:03:38.945 ***** 2025-11-01 13:22:39.896619 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:22:39.896630 | orchestrator | 2025-11-01 13:22:39.896640 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-11-01 13:22:39.896651 | orchestrator | Saturday 01 November 2025 13:15:23 +0000 (0:00:03.501) 0:03:42.447 ***** 2025-11-01 13:22:39.896662 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:22:39.896672 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-11-01 13:22:39.896683 | orchestrator | 2025-11-01 13:22:39.896694 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-11-01 13:22:39.896704 | orchestrator | Saturday 01 November 2025 13:15:28 +0000 (0:00:04.443) 0:03:46.890 ***** 2025-11-01 13:22:39.896715 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:22:39.896726 | orchestrator | 2025-11-01 13:22:39.896737 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-11-01 13:22:39.896747 | orchestrator | Saturday 01 November 2025 13:15:33 +0000 (0:00:04.706) 0:03:51.597 ***** 2025-11-01 13:22:39.896758 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-11-01 13:22:39.896768 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-11-01 13:22:39.896779 | orchestrator | 2025-11-01 13:22:39.896790 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-11-01 13:22:39.896800 | orchestrator | Saturday 01 November 2025 13:15:42 +0000 (0:00:08.928) 0:04:00.526 ***** 2025-11-01 
13:22:39.896816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.896927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.896956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.896984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.896997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.897010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.897021 | orchestrator | 2025-11-01 13:22:39.897032 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-11-01 13:22:39.897043 | orchestrator | Saturday 01 November 2025 13:15:44 +0000 (0:00:02.316) 0:04:02.842 ***** 2025-11-01 13:22:39.897054 | orchestrator | skipping: [testbed-node-0] 2025-11-01 
13:22:39.897065 | orchestrator | 2025-11-01 13:22:39.897076 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-11-01 13:22:39.897093 | orchestrator | Saturday 01 November 2025 13:15:44 +0000 (0:00:00.180) 0:04:03.023 ***** 2025-11-01 13:22:39.897104 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.897115 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.897125 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.897136 | orchestrator | 2025-11-01 13:22:39.897147 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-11-01 13:22:39.897158 | orchestrator | Saturday 01 November 2025 13:15:44 +0000 (0:00:00.338) 0:04:03.361 ***** 2025-11-01 13:22:39.897218 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:22:39.897232 | orchestrator | 2025-11-01 13:22:39.897243 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-11-01 13:22:39.897254 | orchestrator | Saturday 01 November 2025 13:15:47 +0000 (0:00:02.334) 0:04:05.696 ***** 2025-11-01 13:22:39.897264 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.897275 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.897286 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.897296 | orchestrator | 2025-11-01 13:22:39.897307 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-11-01 13:22:39.897318 | orchestrator | Saturday 01 November 2025 13:15:48 +0000 (0:00:00.946) 0:04:06.643 ***** 2025-11-01 13:22:39.897349 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.897360 | orchestrator | 2025-11-01 13:22:39.897371 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-01 13:22:39.897382 | orchestrator | 
Saturday 01 November 2025 13:15:49 +0000 (0:00:01.461) 0:04:08.104 ***** 2025-11-01 13:22:39.897400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.897413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.897433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.897482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.897501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.897514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.897527 | orchestrator | 2025-11-01 13:22:39.897539 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-01 13:22:39.897551 | orchestrator | Saturday 01 November 2025 13:15:54 +0000 (0:00:05.388) 0:04:13.493 ***** 
2025-11-01 13:22:39.897565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.897599 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.897648 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.897677 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.897691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.897724 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.897736 | orchestrator | 2025-11-01 13:22:39.897748 | orchestrator | TASK [service-cert-copy : nova | Copying over 
backend internal TLS key] ******** 2025-11-01 13:22:39.897760 | orchestrator | Saturday 01 November 2025 13:15:56 +0000 (0:00:01.268) 0:04:14.761 ***** 2025-11-01 13:22:39.897806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.897838 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.897851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2025-11-01 13:22:39.897884 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.897928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.897946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.897958 | orchestrator | 
skipping: [testbed-node-2] 2025-11-01 13:22:39.897969 | orchestrator | 2025-11-01 13:22:39.897980 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-11-01 13:22:39.897990 | orchestrator | Saturday 01 November 2025 13:15:59 +0000 (0:00:02.851) 0:04:17.613 ***** 2025-11-01 13:22:39.898002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898174 | orchestrator | 2025-11-01 
13:22:39.898184 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-11-01 13:22:39.898195 | orchestrator | Saturday 01 November 2025 13:16:02 +0000 (0:00:03.015) 0:04:20.628 ***** 2025-11-01 13:22:39.898236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898421 | orchestrator | 2025-11-01 13:22:39.898433 | orchestrator | TASK [nova : Copying over 
existing policy file] ******************************** 2025-11-01 13:22:39.898444 | orchestrator | Saturday 01 November 2025 13:16:17 +0000 (0:00:15.390) 0:04:36.019 ***** 2025-11-01 13:22:39.898462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.898482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.898493 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.898505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.898517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-11-01 13:22:39.898559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-11-01 13:22:39.898579 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.898591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.898613 
| orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.898625 | orchestrator | 2025-11-01 13:22:39.898635 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-11-01 13:22:39.898646 | orchestrator | Saturday 01 November 2025 13:16:19 +0000 (0:00:02.391) 0:04:38.412 ***** 2025-11-01 13:22:39.898657 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.898666 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.898676 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.898686 | orchestrator | 2025-11-01 13:22:39.898695 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-11-01 13:22:39.898705 | orchestrator | Saturday 01 November 2025 13:16:23 +0000 (0:00:03.268) 0:04:41.681 ***** 2025-11-01 13:22:39.898715 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.898724 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.898734 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.898744 | orchestrator | 2025-11-01 13:22:39.898753 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-11-01 13:22:39.898763 | orchestrator | Saturday 01 November 2025 13:16:23 +0000 (0:00:00.482) 0:04:42.163 ***** 2025-11-01 13:22:39.898774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-11-01 13:22:39.898859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.898879 | orchestrator | 2025-11-01 13:22:39.898889 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 13:22:39.898899 | orchestrator | Saturday 01 November 2025 13:16:27 +0000 (0:00:03.462) 0:04:45.626 ***** 2025-11-01 13:22:39.898908 | orchestrator | 2025-11-01 13:22:39.898918 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 13:22:39.898952 | orchestrator | Saturday 01 November 2025 13:16:27 +0000 (0:00:00.172) 0:04:45.798 ***** 2025-11-01 13:22:39.898963 | orchestrator | 2025-11-01 13:22:39.898973 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-11-01 13:22:39.898983 | orchestrator | Saturday 01 November 2025 13:16:27 +0000 (0:00:00.167) 0:04:45.966 ***** 2025-11-01 13:22:39.898992 | orchestrator | 2025-11-01 13:22:39.899002 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 
2025-11-01 13:22:39.899012 | orchestrator | Saturday 01 November 2025 13:16:27 +0000 (0:00:00.163) 0:04:46.130 ***** 2025-11-01 13:22:39.899027 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.899037 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.899047 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.899056 | orchestrator | 2025-11-01 13:22:39.899066 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-11-01 13:22:39.899076 | orchestrator | Saturday 01 November 2025 13:16:52 +0000 (0:00:25.165) 0:05:11.295 ***** 2025-11-01 13:22:39.899085 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.899095 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.899104 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.899114 | orchestrator | 2025-11-01 13:22:39.899123 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-11-01 13:22:39.899133 | orchestrator | 2025-11-01 13:22:39.899142 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 13:22:39.899152 | orchestrator | Saturday 01 November 2025 13:17:02 +0000 (0:00:09.538) 0:05:20.833 ***** 2025-11-01 13:22:39.899166 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.899177 | orchestrator | 2025-11-01 13:22:39.899187 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 13:22:39.899196 | orchestrator | Saturday 01 November 2025 13:17:04 +0000 (0:00:01.866) 0:05:22.700 ***** 2025-11-01 13:22:39.899206 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.899216 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.899225 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.899235 
| orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.899244 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.899254 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.899263 | orchestrator | 2025-11-01 13:22:39.899273 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-11-01 13:22:39.899283 | orchestrator | Saturday 01 November 2025 13:17:06 +0000 (0:00:01.967) 0:05:24.668 ***** 2025-11-01 13:22:39.899292 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.899301 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.899311 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.899321 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-11-01 13:22:39.899347 | orchestrator | 2025-11-01 13:22:39.899357 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-11-01 13:22:39.899367 | orchestrator | Saturday 01 November 2025 13:17:08 +0000 (0:00:02.301) 0:05:26.969 ***** 2025-11-01 13:22:39.899377 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-11-01 13:22:39.899386 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-11-01 13:22:39.899396 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-11-01 13:22:39.899406 | orchestrator | 2025-11-01 13:22:39.899415 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-11-01 13:22:39.899425 | orchestrator | Saturday 01 November 2025 13:17:09 +0000 (0:00:01.514) 0:05:28.484 ***** 2025-11-01 13:22:39.899434 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-11-01 13:22:39.899444 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-11-01 13:22:39.899454 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-11-01 13:22:39.899463 | orchestrator | 2025-11-01 13:22:39.899473 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-11-01 13:22:39.899483 | orchestrator | Saturday 01 November 2025 13:17:11 +0000 (0:00:01.701) 0:05:30.185 ***** 2025-11-01 13:22:39.899492 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-11-01 13:22:39.899502 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.899511 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-11-01 13:22:39.899521 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.899530 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-11-01 13:22:39.899545 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.899555 | orchestrator | 2025-11-01 13:22:39.899565 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-11-01 13:22:39.899574 | orchestrator | Saturday 01 November 2025 13:17:13 +0000 (0:00:01.885) 0:05:32.071 ***** 2025-11-01 13:22:39.899584 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 13:22:39.899594 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 13:22:39.899603 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 13:22:39.899613 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 13:22:39.899622 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.899632 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-11-01 13:22:39.899642 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 13:22:39.899651 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 13:22:39.899661 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 
2025-11-01 13:22:39.899670 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-01 13:22:39.899680 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.899690 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-11-01 13:22:39.899726 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-11-01 13:22:39.899737 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.899747 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-11-01 13:22:39.899756 | orchestrator | 2025-11-01 13:22:39.899766 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-11-01 13:22:39.899776 | orchestrator | Saturday 01 November 2025 13:17:15 +0000 (0:00:02.228) 0:05:34.299 ***** 2025-11-01 13:22:39.899785 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.899795 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.899805 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.899814 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.899824 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.899833 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.899843 | orchestrator | 2025-11-01 13:22:39.899852 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-11-01 13:22:39.899862 | orchestrator | Saturday 01 November 2025 13:17:18 +0000 (0:00:02.900) 0:05:37.200 ***** 2025-11-01 13:22:39.899871 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.899881 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.899891 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.899900 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.899910 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.899919 | orchestrator 
| changed: [testbed-node-3] 2025-11-01 13:22:39.899929 | orchestrator | 2025-11-01 13:22:39.899938 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-11-01 13:22:39.899952 | orchestrator | Saturday 01 November 2025 13:17:20 +0000 (0:00:02.013) 0:05:39.214 ***** 2025-11-01 13:22:39.899963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.899980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.899991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900219 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900239 | orchestrator | 2025-11-01 13:22:39.900249 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 13:22:39.900258 | orchestrator | Saturday 01 November 2025 13:17:25 +0000 (0:00:04.941) 0:05:44.156 ***** 2025-11-01 13:22:39.900268 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:22:39.900279 | orchestrator | 2025-11-01 13:22:39.900289 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-11-01 13:22:39.900298 | orchestrator | Saturday 01 November 2025 13:17:29 +0000 (0:00:03.549) 0:05:47.705 ***** 2025-11-01 13:22:39.900352 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900486 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.900592 | orchestrator | 2025-11-01 13:22:39.900601 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-11-01 13:22:39.900611 | orchestrator | Saturday 01 November 2025 13:17:37 +0000 (0:00:07.833) 0:05:55.538 ***** 2025-11-01 13:22:39.900621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.900632 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.900643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900676 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.900687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.900711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.900721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900731 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.900741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.900752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.900788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900805 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.900815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.900830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900840 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.900850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.900860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900870 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.900880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.900914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.900925 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.900940 | orchestrator | 2025-11-01 13:22:39.900950 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-11-01 13:22:39.900960 | orchestrator | Saturday 01 November 2025 13:17:41 +0000 (0:00:04.529) 0:06:00.067 ***** 2025-11-01 13:22:39.900970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.900984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.900995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901005 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.901015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.901025 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.901065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901076 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.901090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.901101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901111 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.901121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.901131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-11-01 13:22:39.901141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-11-01 13:22:39.901195 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.901209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901220 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.901230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-11-01 13:22:39.901240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-11-01 13:22:39.901250 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.901259 | orchestrator | 2025-11-01 13:22:39.901269 | orchestrator | TASK [nova-cell : 
include_tasks] ***********************************************
2025-11-01 13:22:39.901279 | orchestrator | Saturday 01 November 2025 13:17:46 +0000 (0:00:05.039) 0:06:05.107 *****
2025-11-01 13:22:39.901289 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.901298 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.901308 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.901318 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-11-01 13:22:39.901327 | orchestrator |
2025-11-01 13:22:39.901353 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-11-01 13:22:39.901362 | orchestrator | Saturday 01 November 2025 13:17:49 +0000 (0:00:03.244) 0:06:08.352 *****
2025-11-01 13:22:39.901372 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-01 13:22:39.901385 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-11-01 13:22:39.901395 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-11-01 13:22:39.901405 | orchestrator |
2025-11-01 13:22:39.901414 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-11-01 13:22:39.901424 | orchestrator | Saturday 01 November 2025 13:17:52 +0000 (0:00:02.941) 0:06:11.293 *****
2025-11-01 13:22:39.901433 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-01 13:22:39.901443 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-11-01 13:22:39.901452 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-11-01 13:22:39.901461 | orchestrator |
2025-11-01 13:22:39.901471 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-11-01 13:22:39.901480 | orchestrator | Saturday 01 November 2025 13:17:55 +0000 (0:00:02.897) 0:06:14.191 *****
2025-11-01 13:22:39.901490 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:22:39.901499 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:22:39.901509 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:22:39.901518 | orchestrator |
2025-11-01 13:22:39.901528 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-11-01 13:22:39.901537 | orchestrator | Saturday 01 November 2025 13:17:56 +0000 (0:00:01.084) 0:06:15.275 *****
2025-11-01 13:22:39.901547 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:22:39.901557 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:22:39.901566 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:22:39.901575 | orchestrator |
2025-11-01 13:22:39.901610 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-11-01 13:22:39.901621 | orchestrator | Saturday 01 November 2025 13:17:58 +0000 (0:00:01.851) 0:06:17.127 *****
2025-11-01 13:22:39.901631 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-01 13:22:39.901641 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-01 13:22:39.901651 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-01 13:22:39.901660 | orchestrator |
2025-11-01 13:22:39.901670 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-11-01 13:22:39.901679 | orchestrator | Saturday 01 November 2025 13:18:00 +0000 (0:00:01.653) 0:06:18.781 *****
2025-11-01 13:22:39.901689 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-01 13:22:39.901698 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-01 13:22:39.901708 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-01 13:22:39.901717 | orchestrator |
2025-11-01 13:22:39.901727 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-11-01 13:22:39.901737 | orchestrator | Saturday 01 November 2025 13:18:02 +0000 (0:00:01.833) 0:06:20.614 *****
2025-11-01 13:22:39.901746 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-11-01 13:22:39.901756 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-11-01 13:22:39.901765 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-11-01 13:22:39.901775 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-11-01 13:22:39.901789 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-11-01 13:22:39.901798 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-11-01 13:22:39.901808 | orchestrator |
2025-11-01 13:22:39.901817 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-11-01 13:22:39.901827 | orchestrator | Saturday 01 November 2025 13:18:08 +0000 (0:00:05.949) 0:06:26.564 *****
2025-11-01 13:22:39.901836 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.901846 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.901855 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.901865 | orchestrator |
2025-11-01 13:22:39.901874 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-11-01 13:22:39.901884 | orchestrator | Saturday 01 November 2025 13:18:08 +0000 (0:00:00.471) 0:06:27.035 *****
2025-11-01 13:22:39.901894 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.901908 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.901918 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.901927 | orchestrator |
2025-11-01 13:22:39.901937 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-11-01 13:22:39.901947 | orchestrator | Saturday 01 November 2025 13:18:08 +0000 (0:00:00.332) 0:06:27.368 *****
2025-11-01 13:22:39.901956 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:22:39.901966 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:22:39.901975 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:22:39.901985 | orchestrator |
2025-11-01 13:22:39.901994 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-11-01 13:22:39.902004 | orchestrator | Saturday 01 November 2025 13:18:10 +0000 (0:00:01.720) 0:06:29.089 *****
2025-11-01 13:22:39.902013 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-01 13:22:39.902051 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-01 13:22:39.902061 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-11-01 13:22:39.902071 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-01 13:22:39.902080 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-01 13:22:39.902090 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-11-01 13:22:39.902100 | orchestrator |
2025-11-01 13:22:39.902109 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-11-01 13:22:39.902119 | orchestrator | Saturday 01 November 2025 13:18:14 +0000 (0:00:03.867) 0:06:32.957 *****
2025-11-01 13:22:39.902128 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-01 13:22:39.902138 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-01 13:22:39.902147 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-01 13:22:39.902157 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-11-01 13:22:39.902166 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:22:39.902175 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-11-01 13:22:39.902185 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:22:39.902194 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-11-01 13:22:39.902204 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:22:39.902213 | orchestrator |
2025-11-01 13:22:39.902223 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-11-01 13:22:39.902232 | orchestrator | Saturday 01 November 2025 13:18:19 +0000 (0:00:04.800) 0:06:37.757 *****
2025-11-01 13:22:39.902242 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.902251 | orchestrator |
2025-11-01 13:22:39.902260 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-11-01 13:22:39.902270 | orchestrator | Saturday 01 November 2025 13:18:19 +0000 (0:00:00.183) 0:06:37.940 *****
2025-11-01 13:22:39.902280 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.902289 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.902298 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.902380 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.902393 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.902403 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.902412 | orchestrator |
2025-11-01 13:22:39.902422 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-11-01 13:22:39.902432 | orchestrator | Saturday 01 November 2025 13:18:20 +0000 (0:00:00.805) 0:06:38.627 *****
2025-11-01 13:22:39.902441 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-11-01 13:22:39.902459 | orchestrator |
2025-11-01 13:22:39.902469 | orchestrator | TASK [nova-cell : Set vendordata file path]
************************************ 2025-11-01 13:22:39.902478 | orchestrator | Saturday 01 November 2025 13:18:20 +0000 (0:00:00.805) 0:06:39.432 ***** 2025-11-01 13:22:39.902488 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.902498 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.902507 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.902517 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.902526 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.902536 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.902545 | orchestrator | 2025-11-01 13:22:39.902555 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-11-01 13:22:39.902565 | orchestrator | Saturday 01 November 2025 13:18:21 +0000 (0:00:01.039) 0:06:40.472 ***** 2025-11-01 13:22:39.902580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902638 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.902679 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902735 | orchestrator |
2025-11-01 13:22:39.902743 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-11-01 13:22:39.902751 | orchestrator | Saturday 01 November 2025 13:18:25 +0000 (0:00:03.939) 0:06:44.411 *****
2025-11-01 13:22:39.902759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.902776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.902789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.902797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.902805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.902814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.902831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.902860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.902876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.902892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.902921 | orchestrator |
2025-11-01 13:22:39.902932 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-11-01 13:22:39.902940 | orchestrator | Saturday 01 November 2025 13:18:33 +0000 (0:00:07.964) 0:06:52.376 *****
2025-11-01 13:22:39.902948 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.902956 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.902964 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.902972 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.902979 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.902987 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.902995 | orchestrator |
2025-11-01 13:22:39.903003 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-11-01 13:22:39.903011 | orchestrator | Saturday 01 November 2025 13:18:35 +0000 (0:00:01.513) 0:06:53.889 *****
2025-11-01 13:22:39.903018 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903026 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903035 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903043 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903051 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903058 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903066 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903074 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-11-01 13:22:39.903082 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903090 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903102 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903110 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903118 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903126 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903134 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-11-01 13:22:39.903142 | orchestrator |
2025-11-01 13:22:39.903150 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-11-01 13:22:39.903157 | orchestrator | Saturday 01 November 2025 13:18:39 +0000 (0:00:03.887) 0:06:57.777 *****
2025-11-01 13:22:39.903165 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.903173 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.903181 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.903189 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903196 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903204 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903212 | orchestrator |
2025-11-01 13:22:39.903220 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-11-01 13:22:39.903228 | orchestrator | Saturday 01 November 2025 13:18:39 +0000 (0:00:00.644) 0:06:58.421 *****
2025-11-01 13:22:39.903235 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903244 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903251 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903259 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903267 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903278 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903286 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-11-01 13:22:39.903294 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903302 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903310 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903318 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903326 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903348 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903356 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903364 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903372 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903383 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903391 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903399 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903412 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903420 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-11-01 13:22:39.903428 | orchestrator |
2025-11-01 13:22:39.903436 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-11-01 13:22:39.903443 | orchestrator | Saturday 01 November 2025 13:18:45 +0000 (0:00:05.959) 0:07:04.380 *****
2025-11-01 13:22:39.903451 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903459 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903467 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903475 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903483 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903491 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903499 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-11-01 13:22:39.903507 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903514 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903522 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903530 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903538 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903546 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903554 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903561 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903570 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903577 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903585 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-11-01 13:22:39.903601 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903608 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903617 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903625 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903632 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-11-01 13:22:39.903640 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903648 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903659 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-11-01 13:22:39.903667 | orchestrator |
2025-11-01 13:22:39.903675 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-11-01 13:22:39.903683 | orchestrator | Saturday 01 November 2025 13:18:55 +0000 (0:00:09.248) 0:07:13.628 *****
2025-11-01 13:22:39.903691 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.903699 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.903707 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.903719 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903727 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903735 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903743 | orchestrator |
2025-11-01 13:22:39.903751 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-11-01 13:22:39.903759 | orchestrator | Saturday 01 November 2025 13:18:56 +0000 (0:00:00.922) 0:07:14.551 *****
2025-11-01 13:22:39.903767 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.903775 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.903783 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.903790 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903798 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903806 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903814 | orchestrator |
2025-11-01 13:22:39.903822 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-11-01 13:22:39.903830 | orchestrator | Saturday 01 November 2025 13:18:56 +0000 (0:00:00.681) 0:07:15.232 *****
2025-11-01 13:22:39.903838 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:22:39.903846 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.903853 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.903865 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:22:39.903873 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:22:39.903881 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.903889 | orchestrator |
2025-11-01 13:22:39.903897 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-11-01 13:22:39.903904 | orchestrator | Saturday 01 November 2025 13:18:59 +0000 (0:00:02.803) 0:07:18.036 *****
2025-11-01 13:22:39.903913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.903921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.903930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.903949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.903958 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.903966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.903978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.903987 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.903995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-11-01 13:22:39.904004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-11-01 13:22:39.904012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.904028 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.904037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.904049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.904057 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.904065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.904074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.904082 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.904090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-11-01 13:22:39.904098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-11-01 13:22:39.904111 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.904119 | orchestrator |
2025-11-01 13:22:39.904128 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-11-01 13:22:39.904136 | orchestrator | Saturday 01 November 2025 13:19:03 +0000 (0:00:03.558) 0:07:21.594 *****
2025-11-01 13:22:39.904143 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-11-01 13:22:39.904151 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-11-01 13:22:39.904159 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:22:39.904167 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-11-01 13:22:39.904178 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-11-01 13:22:39.904186 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:22:39.904194 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-11-01 13:22:39.904202 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-11-01 13:22:39.904210 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.904218 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-11-01 13:22:39.904225 | orchestrator | skipping:
[testbed-node-0] => (item=nova-compute-ironic)  2025-11-01 13:22:39.904233 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-11-01 13:22:39.904241 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-11-01 13:22:39.904249 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.904257 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.904265 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-11-01 13:22:39.904273 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-11-01 13:22:39.904280 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.904288 | orchestrator | 2025-11-01 13:22:39.904296 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-11-01 13:22:39.904304 | orchestrator | Saturday 01 November 2025 13:19:04 +0000 (0:00:01.561) 0:07:23.156 ***** 2025-11-01 13:22:39.904316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904457 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-11-01 13:22:39.904487 | orchestrator | 2025-11-01 13:22:39.904495 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-11-01 13:22:39.904503 | orchestrator | 
Saturday 01 November 2025 13:19:09 +0000 (0:00:04.801) 0:07:27.957 ***** 2025-11-01 13:22:39.904511 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.904519 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.904527 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.904535 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.904543 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.904550 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.904558 | orchestrator | 2025-11-01 13:22:39.904566 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904574 | orchestrator | Saturday 01 November 2025 13:19:11 +0000 (0:00:01.711) 0:07:29.668 ***** 2025-11-01 13:22:39.904582 | orchestrator | 2025-11-01 13:22:39.904590 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904598 | orchestrator | Saturday 01 November 2025 13:19:11 +0000 (0:00:00.380) 0:07:30.048 ***** 2025-11-01 13:22:39.904605 | orchestrator | 2025-11-01 13:22:39.904613 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904621 | orchestrator | Saturday 01 November 2025 13:19:11 +0000 (0:00:00.379) 0:07:30.428 ***** 2025-11-01 13:22:39.904629 | orchestrator | 2025-11-01 13:22:39.904637 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904645 | orchestrator | Saturday 01 November 2025 13:19:12 +0000 (0:00:00.327) 0:07:30.755 ***** 2025-11-01 13:22:39.904653 | orchestrator | 2025-11-01 13:22:39.904664 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904672 | orchestrator | Saturday 01 November 2025 13:19:12 +0000 (0:00:00.441) 0:07:31.197 ***** 2025-11-01 13:22:39.904680 | orchestrator | 2025-11-01 13:22:39.904688 
| orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-11-01 13:22:39.904696 | orchestrator | Saturday 01 November 2025 13:19:12 +0000 (0:00:00.189) 0:07:31.386 ***** 2025-11-01 13:22:39.904703 | orchestrator | 2025-11-01 13:22:39.904711 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-11-01 13:22:39.904719 | orchestrator | Saturday 01 November 2025 13:19:13 +0000 (0:00:00.553) 0:07:31.940 ***** 2025-11-01 13:22:39.904727 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.904735 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.904743 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.904751 | orchestrator | 2025-11-01 13:22:39.904759 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-11-01 13:22:39.904766 | orchestrator | Saturday 01 November 2025 13:19:29 +0000 (0:00:16.302) 0:07:48.243 ***** 2025-11-01 13:22:39.904774 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.904782 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.904790 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.904803 | orchestrator | 2025-11-01 13:22:39.904811 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-11-01 13:22:39.904818 | orchestrator | Saturday 01 November 2025 13:19:49 +0000 (0:00:19.743) 0:08:07.986 ***** 2025-11-01 13:22:39.904826 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.904834 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.904842 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.904850 | orchestrator | 2025-11-01 13:22:39.904861 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-11-01 13:22:39.904869 | orchestrator | Saturday 01 November 2025 13:20:10 +0000 (0:00:21.094) 0:08:29.081 ***** 2025-11-01 
13:22:39.904877 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.904885 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.904893 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.904901 | orchestrator | 2025-11-01 13:22:39.904908 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-11-01 13:22:39.904916 | orchestrator | Saturday 01 November 2025 13:20:41 +0000 (0:00:31.259) 0:09:00.340 ***** 2025-11-01 13:22:39.904924 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.904932 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.904940 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.904948 | orchestrator | 2025-11-01 13:22:39.904956 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-11-01 13:22:39.904964 | orchestrator | Saturday 01 November 2025 13:20:42 +0000 (0:00:00.790) 0:09:01.130 ***** 2025-11-01 13:22:39.904971 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.904979 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.904987 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.904995 | orchestrator | 2025-11-01 13:22:39.905003 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-11-01 13:22:39.905011 | orchestrator | Saturday 01 November 2025 13:20:43 +0000 (0:00:00.840) 0:09:01.970 ***** 2025-11-01 13:22:39.905019 | orchestrator | changed: [testbed-node-4] 2025-11-01 13:22:39.905026 | orchestrator | changed: [testbed-node-5] 2025-11-01 13:22:39.905034 | orchestrator | changed: [testbed-node-3] 2025-11-01 13:22:39.905042 | orchestrator | 2025-11-01 13:22:39.905050 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-11-01 13:22:39.905058 | orchestrator | Saturday 01 November 2025 13:21:08 +0000 (0:00:24.648) 0:09:26.619 ***** 2025-11-01 
13:22:39.905066 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.905074 | orchestrator | 2025-11-01 13:22:39.905082 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-11-01 13:22:39.905089 | orchestrator | Saturday 01 November 2025 13:21:08 +0000 (0:00:00.214) 0:09:26.834 ***** 2025-11-01 13:22:39.905098 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.905105 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.905113 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.905121 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.905129 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.905137 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-11-01 13:22:39.905145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:22:39.905153 | orchestrator | 2025-11-01 13:22:39.905160 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-11-01 13:22:39.905168 | orchestrator | Saturday 01 November 2025 13:21:35 +0000 (0:00:26.995) 0:09:53.829 ***** 2025-11-01 13:22:39.905176 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.905184 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.905192 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.905200 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.905207 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.905215 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.905223 | orchestrator | 2025-11-01 13:22:39.905235 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-11-01 13:22:39.905243 | orchestrator | Saturday 01 November 2025 13:21:48 +0000 (0:00:13.087) 0:10:06.916 ***** 2025-11-01 13:22:39.905251 
| orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.905259 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.905267 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.905274 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:22:39.905282 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:22:39.905290 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-11-01 13:22:39.905298 | orchestrator | 2025-11-01 13:22:39.905306 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-11-01 13:22:39.905314 | orchestrator | Saturday 01 November 2025 13:21:56 +0000 (0:00:08.254) 0:10:15.171 ***** 2025-11-01 13:22:39.905322 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:22:39.905363 | orchestrator | 2025-11-01 13:22:39.905376 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-11-01 13:22:39.905384 | orchestrator | Saturday 01 November 2025 13:22:12 +0000 (0:00:15.847) 0:10:31.018 ***** 2025-11-01 13:22:39.905392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:22:39.905400 | orchestrator | 2025-11-01 13:22:39.905408 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-11-01 13:22:39.905416 | orchestrator | Saturday 01 November 2025 13:22:14 +0000 (0:00:01.520) 0:10:32.539 ***** 2025-11-01 13:22:39.905424 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.905432 | orchestrator | 2025-11-01 13:22:39.905439 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-11-01 13:22:39.905447 | orchestrator | Saturday 01 November 2025 13:22:15 +0000 (0:00:01.582) 0:10:34.122 ***** 2025-11-01 13:22:39.905455 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-11-01 13:22:39.905463 | 
orchestrator | 2025-11-01 13:22:39.905471 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-11-01 13:22:39.905479 | orchestrator | Saturday 01 November 2025 13:22:29 +0000 (0:00:13.759) 0:10:47.882 ***** 2025-11-01 13:22:39.905486 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:22:39.905494 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:22:39.905502 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:22:39.905510 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:22:39.905518 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:22:39.905526 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:22:39.905533 | orchestrator | 2025-11-01 13:22:39.905541 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-11-01 13:22:39.905549 | orchestrator | 2025-11-01 13:22:39.905561 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-11-01 13:22:39.905569 | orchestrator | Saturday 01 November 2025 13:22:31 +0000 (0:00:01.978) 0:10:49.860 ***** 2025-11-01 13:22:39.905577 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:22:39.905585 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:22:39.905592 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:22:39.905600 | orchestrator | 2025-11-01 13:22:39.905608 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-11-01 13:22:39.905616 | orchestrator | 2025-11-01 13:22:39.905624 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-11-01 13:22:39.905631 | orchestrator | Saturday 01 November 2025 13:22:32 +0000 (0:00:01.261) 0:10:51.122 ***** 2025-11-01 13:22:39.905639 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:22:39.905647 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:22:39.905655 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:22:39.905663 | orchestrator | 2025-11-01 13:22:39.905670 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-11-01 13:22:39.905678 | orchestrator | 2025-11-01 13:22:39.905686 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-11-01 13:22:39.905699 | orchestrator | Saturday 01 November 2025 13:22:33 +0000 (0:00:00.665) 0:10:51.787 ***** 2025-11-01 13:22:39.905707 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-11-01 13:22:39.905715 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-11-01 13:22:39.905723 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-11-01 13:22:39.905730 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-11-01 13:22:39.905738 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-11-01 13:22:39.905746 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-11-01 13:22:39.905754 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:22:39.905762 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-11-01 13:22:39.905770 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-11-01 13:22:39.905777 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-11-01 13:22:39.905785 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-11-01 13:22:39.905793 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-11-01 13:22:39.905801 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-11-01 13:22:39.905809 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:22:39.905817 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-11-01 13:22:39.905824 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-11-01 13:22:39.905832 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-11-01 13:22:39.905840 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-11-01 13:22:39.905848 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-11-01 13:22:39.905856 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-11-01 13:22:39.905863 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:22:39.905869 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-11-01 13:22:39.905876 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-11-01 13:22:39.905882 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-11-01 13:22:39.905889 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-11-01 13:22:39.905896 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-11-01 13:22:39.905902 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-11-01 13:22:39.905909 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.905916 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-11-01 13:22:39.905922 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-11-01 13:22:39.905929 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-11-01 13:22:39.905935 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-11-01 13:22:39.905942 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-11-01 13:22:39.905949 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-11-01 13:22:39.905955 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.905965 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-11-01 13:22:39.905972 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-11-01 13:22:39.905978 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-11-01 13:22:39.905985 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-11-01 13:22:39.905992 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-11-01 13:22:39.905998 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-11-01 13:22:39.906005 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.906012 | orchestrator |
2025-11-01 13:22:39.906037 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-11-01 13:22:39.906048 | orchestrator |
2025-11-01 13:22:39.906055 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-11-01 13:22:39.906062 | orchestrator | Saturday 01 November 2025 13:22:34 +0000 (0:00:01.531) 0:10:53.318 *****
2025-11-01 13:22:39.906068 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-11-01 13:22:39.906075 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-11-01 13:22:39.906082 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.906088 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-11-01 13:22:39.906095 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-11-01 13:22:39.906102 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.906108 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-11-01 13:22:39.906115 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-11-01 13:22:39.906125 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.906132 | orchestrator |
2025-11-01 13:22:39.906138 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-11-01 13:22:39.906145 | orchestrator |
2025-11-01 13:22:39.906151 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-11-01 13:22:39.906158 | orchestrator | Saturday 01 November 2025 13:22:35 +0000 (0:00:00.835) 0:10:54.154 *****
2025-11-01 13:22:39.906165 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.906171 | orchestrator |
2025-11-01 13:22:39.906178 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-11-01 13:22:39.906185 | orchestrator |
2025-11-01 13:22:39.906191 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-11-01 13:22:39.906198 | orchestrator | Saturday 01 November 2025 13:22:36 +0000 (0:00:00.763) 0:10:54.918 *****
2025-11-01 13:22:39.906205 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:22:39.906211 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:22:39.906218 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:22:39.906225 | orchestrator |
2025-11-01 13:22:39.906231 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:22:39.906238 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:22:39.906245 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-11-01 13:22:39.906252 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-11-01 13:22:39.906259 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-11-01 13:22:39.906266 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-11-01 13:22:39.906272 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-11-01 13:22:39.906279 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-11-01 13:22:39.906285 | orchestrator |
2025-11-01 13:22:39.906292 | orchestrator |
2025-11-01 13:22:39.906299 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:22:39.906305 | orchestrator | Saturday 01 November 2025 13:22:36 +0000 (0:00:00.470) 0:10:55.388 *****
2025-11-01 13:22:39.906312 | orchestrator | ===============================================================================
2025-11-01 13:22:39.906319 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.45s
2025-11-01 13:22:39.906325 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 31.26s
2025-11-01 13:22:39.906348 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 27.00s
2025-11-01 13:22:39.906355 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.17s
2025-11-01 13:22:39.906362 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.65s
2025-11-01 13:22:39.906369 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 24.24s
2025-11-01 13:22:39.906375 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.99s
2025-11-01 13:22:39.906382 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.09s
2025-11-01 13:22:39.906389 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.74s
2025-11-01 13:22:39.906395 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 18.48s
2025-11-01 13:22:39.906406 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.43s
2025-11-01 13:22:39.906413 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 16.39s
2025-11-01 13:22:39.906419 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 16.30s
2025-11-01 13:22:39.906426 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.85s
2025-11-01 13:22:39.906433 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 15.39s
2025-11-01 13:22:39.906439 | orchestrator | nova-cell : Create cell ------------------------------------------------ 15.14s
2025-11-01 13:22:39.906446 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.76s
2025-11-01 13:22:39.906453 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 13.09s
2025-11-01 13:22:39.906459 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 11.19s
2025-11-01 13:22:39.906466 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.54s
2025-11-01 13:22:39.906473 | orchestrator | 2025-11-01 13:22:39 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:22:42.935032 | orchestrator | 2025-11-01 13:22:42 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:22:42.936911 | orchestrator | 2025-11-01 13:22:42 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:22:42.938686 | orchestrator | 2025-11-01 13:22:42 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED
2025-11-01 13:22:42.938711 | orchestrator | 2025-11-01 13:22:42 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:22:45.976834 | orchestrator | 2025-11-01 13:22:45 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:22:45.977558 | orchestrator | 2025-11-01 13:22:45 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:22:45.978445 | orchestrator | 2025-11-01 13:22:45 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state STARTED
2025-11-01
13:22:45.978691 | orchestrator | 2025-11-01 13:22:45 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:43.814554 | orchestrator | 2025-11-01 13:23:43 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:43.815994 | orchestrator | 2025-11-01 13:23:43 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:43.821033 | orchestrator | 2025-11-01 13:23:43 | INFO  | Task 31f51de1-c719-4bc3-b8ac-972997375d7f is in state SUCCESS
2025-11-01 13:23:43.822875 | orchestrator |
2025-11-01 13:23:43.822919 | orchestrator |
2025-11-01
13:23:43.822931 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:23:43.822943 | orchestrator |
2025-11-01 13:23:43.822954 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:23:43.822966 | orchestrator | Saturday 01 November 2025 13:21:14 +0000 (0:00:00.373) 0:00:00.373 *****
2025-11-01 13:23:43.822977 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:23:43.822989 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:23:43.823000 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:23:43.823011 | orchestrator |
2025-11-01 13:23:43.823022 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:23:43.823033 | orchestrator | Saturday 01 November 2025 13:21:15 +0000 (0:00:00.330) 0:00:00.703 *****
2025-11-01 13:23:43.823044 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-11-01 13:23:43.823054 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-11-01 13:23:43.823065 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-11-01 13:23:43.823076 | orchestrator |
2025-11-01 13:23:43.823087 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-11-01 13:23:43.823097 | orchestrator |
2025-11-01 13:23:43.823108 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-11-01 13:23:43.823119 | orchestrator | Saturday 01 November 2025 13:21:15 +0000 (0:00:00.481) 0:00:01.185 *****
2025-11-01 13:23:43.823129 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:23:43.823141 | orchestrator |
2025-11-01 13:23:43.823152 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-11-01 13:23:43.823163 | orchestrator | Saturday 01 November
2025 13:21:16 +0000 (0:00:00.607) 0:00:01.793 *****
2025-11-01 13:23:43.823177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.823216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.823240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.823252 | orchestrator |
2025-11-01 13:23:43.823264 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-11-01 13:23:43.823275 | orchestrator | Saturday 01 November 2025 13:21:17 +0000 (0:00:00.782) 0:00:02.576 *****
2025-11-01 13:23:43.823286 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-11-01 13:23:43.823298 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-11-01 13:23:43.823309 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-11-01 13:23:43.823320 | orchestrator |
2025-11-01 13:23:43.823331 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-11-01 13:23:43.823398 | orchestrator | Saturday 01 November 2025 13:21:18 +0000 (0:00:00.958) 0:00:03.534 *****
2025-11-01 13:23:43.823418 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:23:43.823435 | orchestrator |
2025-11-01 13:23:43.823446 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-11-01 13:23:43.823457 | orchestrator | Saturday 01 November 2025 13:21:18 +0000 (0:00:00.776) 0:00:04.311 *****
2025-11-01 13:23:43.824015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port':
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824086 | orchestrator |
2025-11-01 13:23:43.824098 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-11-01 13:23:43.824108 | orchestrator | Saturday 01 November 2025 13:21:20 +0000 (0:00:01.428) 0:00:05.740 *****
2025-11-01 13:23:43.824156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824191 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.824202 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:23:43.824224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824236 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:23:43.824247 | orchestrator |
2025-11-01 13:23:43.824258 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-11-01 13:23:43.824269 | orchestrator | Saturday 01 November 2025 13:21:20 +0000 (0:00:00.443) 0:00:06.183 *****
2025-11-01 13:23:43.824280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824723 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.824735 |
orchestrator | skipping: [testbed-node-1]
2025-11-01 13:23:43.824747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824758 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:23:43.824769 | orchestrator |
2025-11-01 13:23:43.824780 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-11-01 13:23:43.824794 | orchestrator | Saturday 01 November 2025 13:21:21 +0000 (0:00:00.906) 0:00:07.090 *****
2025-11-01 13:23:43.824824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.824954 | orchestrator |
2025-11-01 13:23:43.824974 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-11-01 13:23:43.824992 | orchestrator | Saturday 01 November 2025 13:21:22 +0000 (0:00:01.238) 0:00:08.328 *****
2025-11-01 13:23:43.825011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.825032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.825050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.825070 | orchestrator |
2025-11-01 13:23:43.825088 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-11-01 13:23:43.825107 | orchestrator | Saturday 01 November 2025 13:21:24 +0000 (0:00:01.426) 0:00:09.754 *****
2025-11-01 13:23:43.825119 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.825130 |
orchestrator | skipping: [testbed-node-1] 2025-11-01 13:23:43.825141 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:23:43.825152 | orchestrator | 2025-11-01 13:23:43.825163 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-11-01 13:23:43.825181 | orchestrator | Saturday 01 November 2025 13:21:24 +0000 (0:00:00.552) 0:00:10.306 ***** 2025-11-01 13:23:43.825192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 13:23:43.825203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 13:23:43.825214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-11-01 13:23:43.825225 | orchestrator | 2025-11-01 13:23:43.825235 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-11-01 13:23:43.825246 | orchestrator | Saturday 01 November 2025 13:21:26 +0000 (0:00:01.315) 0:00:11.622 ***** 2025-11-01 13:23:43.825257 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 13:23:43.825268 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 13:23:43.825278 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-11-01 13:23:43.825298 | orchestrator | 2025-11-01 13:23:43.825310 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-11-01 13:23:43.825323 | orchestrator | Saturday 01 November 2025 13:21:27 +0000 (0:00:01.295) 0:00:12.917 ***** 2025-11-01 13:23:43.825422 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-11-01 13:23:43.825439 | orchestrator | 2025-11-01 13:23:43.825451 | 
orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-11-01 13:23:43.825463 | orchestrator | Saturday 01 November 2025 13:21:28 +0000 (0:00:00.862) 0:00:13.780 ***** 2025-11-01 13:23:43.825474 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-11-01 13:23:43.825487 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-11-01 13:23:43.825500 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:23:43.825512 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:23:43.825524 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:23:43.825536 | orchestrator | 2025-11-01 13:23:43.825548 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-11-01 13:23:43.825560 | orchestrator | Saturday 01 November 2025 13:21:29 +0000 (0:00:00.757) 0:00:14.537 ***** 2025-11-01 13:23:43.825572 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:23:43.825584 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:23:43.825595 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:23:43.825608 | orchestrator | 2025-11-01 13:23:43.825620 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-11-01 13:23:43.825632 | orchestrator | Saturday 01 November 2025 13:21:30 +0000 (0:00:00.970) 0:00:15.508 ***** 2025-11-01 13:23:43.825646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098018, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.771486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098018, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.771486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098018, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.771486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098081, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.784249, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098081, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.784249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098081, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.784249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098038, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7758045, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098038, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7758045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098038, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7758045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098086, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7865534, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098086, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7865534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098086, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7865534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098057, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 
'mtime': 1761955328.0, 'ctime': 1761999888.7794347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098057, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7794347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098057, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7794347, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 
1098072, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098072, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098072, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.825989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 
'inode': 1098013, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7707589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098013, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7707589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098013, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7707589, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098024, 
'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.773793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098024, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.773793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098024, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.773793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 
'inode': 1098042, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.77581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098042, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.77581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098042, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.77581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 19609, 'inode': 1098063, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7808604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098063, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7808604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098063, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7808604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 12997, 'inode': 1098079, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7837536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098079, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7837536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098079, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7837536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098031, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7749486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098031, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7749486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098031, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7749486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098069, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7822227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098069, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7822227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098069, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7822227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.826686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098058, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7803771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-11-01 13:23:43.826697 to 13:23:43.827699 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] (one loop item per node per file; the identical per-node stat output is condensed here). Every item is a regular file under /operations/grafana/dashboards/ with mode 0644, uid 0, gid 0 (root:root), dev 120, nlink 1, atime = mtime = 1761955328.0. Per-file values:
2025-11-01 13:23:43 | orchestrator |   ceph/osds-overview.json | size 38432 | inode 1098058 | ctime 1761999888.7803771
2025-11-01 13:23:43 | orchestrator |   ceph/multi-cluster-overview.json | size 62676 | inode 1098051 | ctime 1761999888.779032
2025-11-01 13:23:43 | orchestrator |   ceph/hosts-overview.json | size 27218 | inode 1098047 | ctime 1761999888.7782645
2025-11-01 13:23:43 | orchestrator |   ceph/pool-overview.json | size 49139 | inode 1098066 | ctime 1761999888.7813752
2025-11-01 13:23:43 | orchestrator |   ceph/host-details.json | size 44791 | inode 1098045 | ctime 1761999888.777239
2025-11-01 13:23:43 | orchestrator |   ceph/radosgw-sync-overview.json | size 16156 | inode 1098075 | ctime 1761999888.7837536
2025-11-01 13:23:43 | orchestrator |   openstack/openstack.json | size 57270 | inode 1098259 | ctime 1761999888.8350332
2025-11-01 13:23:43 | orchestrator |   infrastructure/haproxy.json | size 410814 | inode 1098135 | ctime 1761999888.7981527
2025-11-01 13:23:43 | orchestrator |   infrastructure/database.json | size 30898 | inode 1098116 | ctime 1761999888.790425
2025-11-01 13:23:43 | orchestrator |   infrastructure/node-rsrc-use.json | size 15725 | inode 1098167 | ctime 1761999888.8007715
2025-11-01 13:23:43 | orchestrator |   infrastructure/alertmanager-overview.json | size 9645 | inode 1098103 | ctime 1761999888.7876298
2025-11-01 13:23:43 | orchestrator |   infrastructure/opensearch.json | size 65458 | inode 1098236 | ctime 1761999888.8268108
2025-11-01 13:23:43 | orchestrator |   infrastructure/node_exporter_full.json | size 682774 | inode 1098170 | ctime 1761999888.8241346
2025-11-01 13:23:43 | orchestrator |   infrastructure/prometheus-remote-write.json | size 22317 | inode 1098238 | ctime 1761999888.8281229
2025-11-01 13:23:43 | orchestrator |   infrastructure/redfish.json | size 38087 | inode 1098256 | ctime 1761999888.8331091
2025-11-01 13:23:43 | orchestrator |   infrastructure/nodes.json | size 21109 | inode 1098235 | ctime 1761999888.8261106
2025-11-01 13:23:43 | orchestrator |   infrastructure/memcached.json | size 24243 | inode 1098158 | ctime 1761999888.799544
2025-11-01 13:23:43 | orchestrator |   infrastructure/fluentd.json | size 82960 | inode 1098129 | ctime 1761999888.7938263
2025-11-01 13:23:43 | orchestrator |   infrastructure/libvirt.json | size 29672 | inode 1098155 | ctime 1761999888.798406
2025-11-01 13:23:43 | orchestrator |   infrastructure/elasticsearch.json | size 187864 | inode 1098123 | ctime 1761999888.7918394
2025-11-01 13:23:43 | orchestrator |   infrastructure/node-cluster-rsrc-use.json | size 16098 | inode 1098163 | ctime 1761999888.8003457
2025-11-01 13:23:43.827699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098123, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7918394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098248, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.832669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098248, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.832669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827739 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098163, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8003457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098243, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8302693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098248, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.832669, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098243, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8302693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098110, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7887068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098110, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7887068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098243, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8302693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098113, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7892835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098113, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 
1761999888.7892835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098110, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7887068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098221, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8261106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 70691, 'inode': 1098221, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8261106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098113, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.7892835, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098242, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.828527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098242, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.828527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.827996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098221, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.8261106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.828008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098242, 'dev': 120, 'nlink': 1, 'atime': 1761955328.0, 'mtime': 1761955328.0, 'ctime': 1761999888.828527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-11-01 13:23:43.828019 | orchestrator | 2025-11-01 13:23:43.828031 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-11-01 
13:23:43.828043 | orchestrator | Saturday 01 November 2025 13:22:13 +0000 (0:00:43.370) 0:00:58.879 *****
2025-11-01 13:23:43.828054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.828065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.828082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-11-01 13:23:43.828094 | orchestrator |
2025-11-01 13:23:43.828105 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-11-01 13:23:43.828116 | orchestrator | Saturday 01 November 2025 13:22:14 +0000 (0:00:01.147) 0:01:00.026 *****
2025-11-01 13:23:43.828127 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:23:43.828145 | orchestrator |
2025-11-01 13:23:43.828156 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-11-01 13:23:43.828167 | orchestrator | Saturday 01 November 2025 13:22:17 +0000 (0:00:02.513) 0:01:02.540 *****
2025-11-01 13:23:43.828177 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:23:43.828188 | orchestrator |
2025-11-01 13:23:43.828202 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-11-01 13:23:43.828214 | orchestrator | Saturday 01 November 2025 13:22:19 +0000 (0:00:02.573) 0:01:05.114 *****
2025-11-01 13:23:43.828224 | orchestrator |
2025-11-01 13:23:43.828235 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-11-01 13:23:43.828251 | orchestrator | Saturday 01 November 2025 13:22:19 +0000 (0:00:00.070) 0:01:05.184 *****
2025-11-01 13:23:43.828262 | orchestrator |
2025-11-01 13:23:43.828273 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-11-01 13:23:43.828284 | orchestrator | Saturday 01 November 2025 13:22:19 +0000 (0:00:00.062) 0:01:05.247 *****
2025-11-01 13:23:43.828295 | orchestrator |
2025-11-01 13:23:43.828305 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-11-01 13:23:43.828316 | orchestrator | Saturday 01 November 2025 13:22:20 +0000 (0:00:00.280) 0:01:05.527 *****
2025-11-01 13:23:43.828327 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:23:43.828359 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:23:43.828370 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:23:43.828381 | orchestrator |
2025-11-01 13:23:43.828391 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-11-01 13:23:43.828402 | orchestrator | Saturday 01 November 2025 13:22:27 +0000 (0:00:07.255) 0:01:12.783 *****
2025-11-01 13:23:43.828413 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:23:43.828424 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:23:43.828435 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-11-01 13:23:43.828446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-11-01 13:23:43.828457 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-11-01 13:23:43.828467 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:23:43.828478 | orchestrator |
2025-11-01 13:23:43.828489 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-11-01 13:23:43.828500 | orchestrator | Saturday 01 November 2025 13:23:07 +0000 (0:00:39.738) 0:01:52.522 *****
2025-11-01 13:23:43.828511 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.828521 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:23:43.828532 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:23:43.828543 | orchestrator |
2025-11-01 13:23:43.828553 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-11-01 13:23:43.828564 | orchestrator | Saturday 01 November 2025 13:23:36 +0000 (0:00:29.392) 0:02:21.915 *****
2025-11-01 13:23:43.828575 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:23:43.828586 | orchestrator |
2025-11-01 13:23:43.828597 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-11-01 13:23:43.828607 | orchestrator | Saturday 01 November 2025 13:23:39 +0000 (0:00:02.520) 0:02:24.435 *****
2025-11-01 13:23:43.828618 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.828629 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:23:43.828639 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:23:43.828650 | orchestrator |
2025-11-01 13:23:43.828661 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-11-01 13:23:43.828672 | orchestrator | Saturday 01 November 2025 13:23:39 +0000 (0:00:00.536) 0:02:24.971 *****
2025-11-01 13:23:43.828683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-11-01 13:23:43.828703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-11-01 13:23:43.828715 | orchestrator |
2025-11-01 13:23:43.828726 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-11-01 13:23:43.828737 | orchestrator | Saturday 01 November 2025 13:23:42 +0000 (0:00:02.806) 0:02:27.778 *****
2025-11-01 13:23:43.828747 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:23:43.828758 | orchestrator |
2025-11-01 13:23:43.828768 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:23:43.828785 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-01 13:23:43.828797 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-01 13:23:43.828808 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-01 13:23:43.828819 | orchestrator |
2025-11-01 13:23:43.828830 | orchestrator |
2025-11-01 13:23:43.828841 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:23:43.828851 | orchestrator | Saturday 01 November 2025 13:23:42 +0000 (0:00:00.295) 0:02:28.074 *****
2025-11-01 13:23:43.828862 | orchestrator | ===============================================================================
2025-11-01 13:23:43.828873 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 43.37s
2025-11-01 13:23:43.828883 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.74s
2025-11-01 13:23:43.828894 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.39s
2025-11-01 13:23:43.828905 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.26s
2025-11-01 13:23:43.828915 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.81s
2025-11-01 13:23:43.828931 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.57s
2025-11-01 13:23:43.828942 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.52s
2025-11-01 13:23:43.828953 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.51s
2025-11-01 13:23:43.828964 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.43s
2025-11-01 13:23:43.828974 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s
2025-11-01 13:23:43.828985 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s
2025-11-01 13:23:43.828996 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s
2025-11-01 13:23:43.829006 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s
2025-11-01 13:23:43.829017 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s
2025-11-01 13:23:43.829028 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.97s
2025-11-01 13:23:43.829038 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s
2025-11-01 13:23:43.829049 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.91s
2025-11-01 13:23:43.829060 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.86s
2025-11-01 13:23:43.829071 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.78s
2025-11-01 13:23:43.829081 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.78s
2025-11-01 13:23:43.829099 | orchestrator | 2025-11-01 13:23:43 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:46.862684 | orchestrator | 2025-11-01 13:23:46 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:46.865365 | orchestrator | 2025-11-01 13:23:46 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:46.865400 | orchestrator | 2025-11-01 13:23:46 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:49.907300 | orchestrator | 2025-11-01 13:23:49 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:49.908246 | orchestrator | 2025-11-01 13:23:49 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:49.908264 | orchestrator | 2025-11-01 13:23:49 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:52.948089 | orchestrator | 2025-11-01 13:23:52 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:52.949162 | orchestrator | 2025-11-01 13:23:52 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:52.949197 | orchestrator | 2025-11-01 13:23:52 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:55.982933 | orchestrator | 2025-11-01 13:23:55 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:55.983875 | orchestrator | 2025-11-01 13:23:55 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:55.983912 | orchestrator | 2025-11-01 13:23:55 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:23:59.031779 | orchestrator | 2025-11-01 13:23:59 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:23:59.034113 | orchestrator | 2025-11-01 13:23:59 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:23:59.034201 | orchestrator | 2025-11-01 13:23:59 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:02.078384 | orchestrator | 2025-11-01 13:24:02 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:02.079823 | orchestrator | 2025-11-01 13:24:02 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:02.079851 | orchestrator | 2025-11-01 13:24:02 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:05.124962 | orchestrator | 2025-11-01 13:24:05 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:05.126162 | orchestrator | 2025-11-01 13:24:05 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:05.126191 | orchestrator | 2025-11-01 13:24:05 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:08.165038 | orchestrator | 2025-11-01 13:24:08 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:08.166692 | orchestrator | 2025-11-01 13:24:08 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:08.166720 | orchestrator | 2025-11-01 13:24:08 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:11.211042 | orchestrator | 2025-11-01 13:24:11 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:11.212517 | orchestrator | 2025-11-01 13:24:11 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:11.212551 | orchestrator | 2025-11-01 13:24:11 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:14.255227 | orchestrator | 2025-11-01 13:24:14 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:14.256181 | orchestrator | 2025-11-01 13:24:14 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:14.256239 | orchestrator | 2025-11-01 13:24:14 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:17.293963 | orchestrator | 2025-11-01 13:24:17 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:17.295929 | orchestrator | 2025-11-01 13:24:17 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:17.296023 | orchestrator | 2025-11-01 13:24:17 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:20.339187 | orchestrator | 2025-11-01 13:24:20 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:20.341366 | orchestrator | 2025-11-01 13:24:20 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:20.341397 | orchestrator | 2025-11-01 13:24:20 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:23.382136 | orchestrator | 2025-11-01 13:24:23 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:23.383381 | orchestrator | 2025-11-01 13:24:23 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:23.383410 | orchestrator | 2025-11-01 13:24:23 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:26.423934 | orchestrator | 2025-11-01 13:24:26 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:26.431786 | orchestrator | 2025-11-01 13:24:26 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED
2025-11-01 13:24:26.431825 | orchestrator | 2025-11-01 13:24:26 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:24:29.474661 | orchestrator | 2025-11-01 13:24:29 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:24:29.476335 | orchestrator | 2025-11-01 13:24:29 | INFO  | Task
403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:29.476489 | orchestrator | 2025-11-01 13:24:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:32.513255 | orchestrator | 2025-11-01 13:24:32 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:32.514068 | orchestrator | 2025-11-01 13:24:32 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:32.514099 | orchestrator | 2025-11-01 13:24:32 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:35.553025 | orchestrator | 2025-11-01 13:24:35 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:35.554160 | orchestrator | 2025-11-01 13:24:35 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:35.554201 | orchestrator | 2025-11-01 13:24:35 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:38.599293 | orchestrator | 2025-11-01 13:24:38 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:38.600022 | orchestrator | 2025-11-01 13:24:38 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:38.600053 | orchestrator | 2025-11-01 13:24:38 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:41.650699 | orchestrator | 2025-11-01 13:24:41 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:41.650772 | orchestrator | 2025-11-01 13:24:41 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:41.650785 | orchestrator | 2025-11-01 13:24:41 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:44.687632 | orchestrator | 2025-11-01 13:24:44 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:44.689673 | orchestrator | 2025-11-01 13:24:44 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 
13:24:44.689870 | orchestrator | 2025-11-01 13:24:44 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:47.739178 | orchestrator | 2025-11-01 13:24:47 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:47.740123 | orchestrator | 2025-11-01 13:24:47 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:47.740162 | orchestrator | 2025-11-01 13:24:47 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:50.778772 | orchestrator | 2025-11-01 13:24:50 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:50.778863 | orchestrator | 2025-11-01 13:24:50 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:50.778876 | orchestrator | 2025-11-01 13:24:50 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:53.822835 | orchestrator | 2025-11-01 13:24:53 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:53.824809 | orchestrator | 2025-11-01 13:24:53 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:53.824836 | orchestrator | 2025-11-01 13:24:53 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:56.868787 | orchestrator | 2025-11-01 13:24:56 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:56.870709 | orchestrator | 2025-11-01 13:24:56 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:56.870739 | orchestrator | 2025-11-01 13:24:56 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:24:59.915205 | orchestrator | 2025-11-01 13:24:59 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:24:59.916587 | orchestrator | 2025-11-01 13:24:59 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:24:59.916614 | orchestrator | 2025-11-01 13:24:59 | INFO  | Wait 1 second(s) 
until the next check 2025-11-01 13:25:02.961136 | orchestrator | 2025-11-01 13:25:02 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:02.963114 | orchestrator | 2025-11-01 13:25:02 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:02.963146 | orchestrator | 2025-11-01 13:25:02 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:05.997017 | orchestrator | 2025-11-01 13:25:05 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:06.001185 | orchestrator | 2025-11-01 13:25:05 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:06.001247 | orchestrator | 2025-11-01 13:25:05 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:09.050306 | orchestrator | 2025-11-01 13:25:09 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:09.051223 | orchestrator | 2025-11-01 13:25:09 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:09.051251 | orchestrator | 2025-11-01 13:25:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:12.089605 | orchestrator | 2025-11-01 13:25:12 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:12.092603 | orchestrator | 2025-11-01 13:25:12 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:12.093466 | orchestrator | 2025-11-01 13:25:12 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:15.133274 | orchestrator | 2025-11-01 13:25:15 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:15.134781 | orchestrator | 2025-11-01 13:25:15 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:15.134848 | orchestrator | 2025-11-01 13:25:15 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:18.181578 | orchestrator | 2025-11-01 
13:25:18 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:18.183961 | orchestrator | 2025-11-01 13:25:18 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:18.183983 | orchestrator | 2025-11-01 13:25:18 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:21.226707 | orchestrator | 2025-11-01 13:25:21 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:21.227977 | orchestrator | 2025-11-01 13:25:21 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:21.228011 | orchestrator | 2025-11-01 13:25:21 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:24.268723 | orchestrator | 2025-11-01 13:25:24 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:24.270368 | orchestrator | 2025-11-01 13:25:24 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:24.271523 | orchestrator | 2025-11-01 13:25:24 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:27.302598 | orchestrator | 2025-11-01 13:25:27 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:27.304304 | orchestrator | 2025-11-01 13:25:27 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:27.304459 | orchestrator | 2025-11-01 13:25:27 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:30.353377 | orchestrator | 2025-11-01 13:25:30 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:30.354491 | orchestrator | 2025-11-01 13:25:30 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:30.354529 | orchestrator | 2025-11-01 13:25:30 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:33.398397 | orchestrator | 2025-11-01 13:25:33 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state 
STARTED 2025-11-01 13:25:33.399736 | orchestrator | 2025-11-01 13:25:33 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:33.399759 | orchestrator | 2025-11-01 13:25:33 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:36.455604 | orchestrator | 2025-11-01 13:25:36 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:36.456802 | orchestrator | 2025-11-01 13:25:36 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:36.456932 | orchestrator | 2025-11-01 13:25:36 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:39.498473 | orchestrator | 2025-11-01 13:25:39 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:39.499455 | orchestrator | 2025-11-01 13:25:39 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:39.499488 | orchestrator | 2025-11-01 13:25:39 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:42.545421 | orchestrator | 2025-11-01 13:25:42 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:42.547095 | orchestrator | 2025-11-01 13:25:42 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:42.547168 | orchestrator | 2025-11-01 13:25:42 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:45.589696 | orchestrator | 2025-11-01 13:25:45 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:45.590657 | orchestrator | 2025-11-01 13:25:45 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:45.590687 | orchestrator | 2025-11-01 13:25:45 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:48.640281 | orchestrator | 2025-11-01 13:25:48 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:48.642159 | orchestrator | 2025-11-01 13:25:48 | INFO  
| Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:48.642208 | orchestrator | 2025-11-01 13:25:48 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:51.683025 | orchestrator | 2025-11-01 13:25:51 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:51.684332 | orchestrator | 2025-11-01 13:25:51 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:51.684494 | orchestrator | 2025-11-01 13:25:51 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:54.726873 | orchestrator | 2025-11-01 13:25:54 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:54.727824 | orchestrator | 2025-11-01 13:25:54 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:54.728230 | orchestrator | 2025-11-01 13:25:54 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:25:57.771509 | orchestrator | 2025-11-01 13:25:57 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:25:57.773098 | orchestrator | 2025-11-01 13:25:57 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:25:57.773456 | orchestrator | 2025-11-01 13:25:57 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:00.817077 | orchestrator | 2025-11-01 13:26:00 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:00.819034 | orchestrator | 2025-11-01 13:26:00 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:00.819075 | orchestrator | 2025-11-01 13:26:00 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:03.868461 | orchestrator | 2025-11-01 13:26:03 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:03.871709 | orchestrator | 2025-11-01 13:26:03 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 
13:26:03.872461 | orchestrator | 2025-11-01 13:26:03 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:06.906441 | orchestrator | 2025-11-01 13:26:06 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:06.910138 | orchestrator | 2025-11-01 13:26:06 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:06.910178 | orchestrator | 2025-11-01 13:26:06 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:09.955895 | orchestrator | 2025-11-01 13:26:09 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:09.957172 | orchestrator | 2025-11-01 13:26:09 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:09.957207 | orchestrator | 2025-11-01 13:26:09 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:13.007696 | orchestrator | 2025-11-01 13:26:13 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:13.011279 | orchestrator | 2025-11-01 13:26:13 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:13.011875 | orchestrator | 2025-11-01 13:26:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:16.054527 | orchestrator | 2025-11-01 13:26:16 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:16.055647 | orchestrator | 2025-11-01 13:26:16 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:16.055741 | orchestrator | 2025-11-01 13:26:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:19.099294 | orchestrator | 2025-11-01 13:26:19 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:19.100550 | orchestrator | 2025-11-01 13:26:19 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:19.100568 | orchestrator | 2025-11-01 13:26:19 | INFO  | Wait 1 second(s) 
until the next check 2025-11-01 13:26:22.143712 | orchestrator | 2025-11-01 13:26:22 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:22.146468 | orchestrator | 2025-11-01 13:26:22 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:22.146493 | orchestrator | 2025-11-01 13:26:22 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:25.192284 | orchestrator | 2025-11-01 13:26:25 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:25.194118 | orchestrator | 2025-11-01 13:26:25 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:25.194604 | orchestrator | 2025-11-01 13:26:25 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:28.241706 | orchestrator | 2025-11-01 13:26:28 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:28.245544 | orchestrator | 2025-11-01 13:26:28 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:28.245572 | orchestrator | 2025-11-01 13:26:28 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:31.280065 | orchestrator | 2025-11-01 13:26:31 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:31.281846 | orchestrator | 2025-11-01 13:26:31 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:31.281887 | orchestrator | 2025-11-01 13:26:31 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:34.323603 | orchestrator | 2025-11-01 13:26:34 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:34.325142 | orchestrator | 2025-11-01 13:26:34 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:34.325177 | orchestrator | 2025-11-01 13:26:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:37.365077 | orchestrator | 2025-11-01 
13:26:37 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:37.366768 | orchestrator | 2025-11-01 13:26:37 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:37.366801 | orchestrator | 2025-11-01 13:26:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:40.422506 | orchestrator | 2025-11-01 13:26:40 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:40.422560 | orchestrator | 2025-11-01 13:26:40 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:40.422567 | orchestrator | 2025-11-01 13:26:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:43.473740 | orchestrator | 2025-11-01 13:26:43 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:43.475569 | orchestrator | 2025-11-01 13:26:43 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:43.475601 | orchestrator | 2025-11-01 13:26:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:46.516657 | orchestrator | 2025-11-01 13:26:46 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:46.519148 | orchestrator | 2025-11-01 13:26:46 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:46.519490 | orchestrator | 2025-11-01 13:26:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:49.560502 | orchestrator | 2025-11-01 13:26:49 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:49.561539 | orchestrator | 2025-11-01 13:26:49 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:49.561575 | orchestrator | 2025-11-01 13:26:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:52.599448 | orchestrator | 2025-11-01 13:26:52 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state 
STARTED 2025-11-01 13:26:52.599926 | orchestrator | 2025-11-01 13:26:52 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:52.599954 | orchestrator | 2025-11-01 13:26:52 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:55.638320 | orchestrator | 2025-11-01 13:26:55 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:55.638418 | orchestrator | 2025-11-01 13:26:55 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:55.638433 | orchestrator | 2025-11-01 13:26:55 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:26:58.679989 | orchestrator | 2025-11-01 13:26:58 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:26:58.680730 | orchestrator | 2025-11-01 13:26:58 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:26:58.680851 | orchestrator | 2025-11-01 13:26:58 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:01.727747 | orchestrator | 2025-11-01 13:27:01 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:01.728704 | orchestrator | 2025-11-01 13:27:01 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:01.728792 | orchestrator | 2025-11-01 13:27:01 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:04.766396 | orchestrator | 2025-11-01 13:27:04 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:04.767326 | orchestrator | 2025-11-01 13:27:04 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:04.767399 | orchestrator | 2025-11-01 13:27:04 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:07.810857 | orchestrator | 2025-11-01 13:27:07 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:07.812283 | orchestrator | 2025-11-01 13:27:07 | INFO  
| Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:07.812970 | orchestrator | 2025-11-01 13:27:07 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:10.863967 | orchestrator | 2025-11-01 13:27:10 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:10.865453 | orchestrator | 2025-11-01 13:27:10 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:10.865515 | orchestrator | 2025-11-01 13:27:10 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:13.918140 | orchestrator | 2025-11-01 13:27:13 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:13.918415 | orchestrator | 2025-11-01 13:27:13 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:13.918439 | orchestrator | 2025-11-01 13:27:13 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:16.966591 | orchestrator | 2025-11-01 13:27:16 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:16.968583 | orchestrator | 2025-11-01 13:27:16 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:16.968610 | orchestrator | 2025-11-01 13:27:16 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:20.014415 | orchestrator | 2025-11-01 13:27:20 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:20.017529 | orchestrator | 2025-11-01 13:27:20 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:20.017568 | orchestrator | 2025-11-01 13:27:20 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:23.068301 | orchestrator | 2025-11-01 13:27:23 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:23.069610 | orchestrator | 2025-11-01 13:27:23 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 
13:27:23.069854 | orchestrator | 2025-11-01 13:27:23 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:26.112137 | orchestrator | 2025-11-01 13:27:26 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:26.113023 | orchestrator | 2025-11-01 13:27:26 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:26.113059 | orchestrator | 2025-11-01 13:27:26 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:29.153613 | orchestrator | 2025-11-01 13:27:29 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:29.155733 | orchestrator | 2025-11-01 13:27:29 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state STARTED 2025-11-01 13:27:29.155822 | orchestrator | 2025-11-01 13:27:29 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:27:32.195647 | orchestrator | 2025-11-01 13:27:32 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:27:32.198690 | orchestrator | 2025-11-01 13:27:32 | INFO  | Task 403ca631-8296-439d-b4ad-9fea0d52f346 is in state SUCCESS 2025-11-01 13:27:32.201038 | orchestrator | 2025-11-01 13:27:32.201072 | orchestrator | 2025-11-01 13:27:32.201084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-11-01 13:27:32.201096 | orchestrator | 2025-11-01 13:27:32.201108 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-11-01 13:27:32.201119 | orchestrator | Saturday 01 November 2025 13:22:22 +0000 (0:00:00.304) 0:00:00.304 ***** 2025-11-01 13:27:32.201131 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:27:32.201143 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:27:32.201154 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:27:32.201165 | orchestrator | 2025-11-01 13:27:32.201176 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-11-01 13:27:32.201187 | orchestrator | Saturday 01 November 2025 13:22:22 +0000 (0:00:00.313) 0:00:00.618 ***** 2025-11-01 13:27:32.201198 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-11-01 13:27:32.201210 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-11-01 13:27:32.201221 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-11-01 13:27:32.201259 | orchestrator | 2025-11-01 13:27:32.201270 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-11-01 13:27:32.201281 | orchestrator | 2025-11-01 13:27:32.201292 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-11-01 13:27:32.201303 | orchestrator | Saturday 01 November 2025 13:22:22 +0000 (0:00:00.477) 0:00:01.095 ***** 2025-11-01 13:27:32.201314 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-11-01 13:27:32.201326 | orchestrator | 2025-11-01 13:27:32.201337 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-11-01 13:27:32.201377 | orchestrator | Saturday 01 November 2025 13:22:23 +0000 (0:00:00.613) 0:00:01.709 ***** 2025-11-01 13:27:32.201388 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-11-01 13:27:32.201399 | orchestrator | 2025-11-01 13:27:32.201422 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-11-01 13:27:32.201434 | orchestrator | Saturday 01 November 2025 13:22:27 +0000 (0:00:03.675) 0:00:05.384 ***** 2025-11-01 13:27:32.201444 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-11-01 13:27:32.201456 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 
2025-11-01 13:27:32.201467 | orchestrator | 2025-11-01 13:27:32.201478 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-11-01 13:27:32.201488 | orchestrator | Saturday 01 November 2025 13:22:34 +0000 (0:00:07.459) 0:00:12.844 ***** 2025-11-01 13:27:32.201500 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-11-01 13:27:32.201511 | orchestrator | 2025-11-01 13:27:32.201522 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-11-01 13:27:32.202005 | orchestrator | Saturday 01 November 2025 13:22:38 +0000 (0:00:03.678) 0:00:16.522 ***** 2025-11-01 13:27:32.202067 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-11-01 13:27:32.202079 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-01 13:27:32.202090 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-11-01 13:27:32.202101 | orchestrator | 2025-11-01 13:27:32.202112 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-11-01 13:27:32.202123 | orchestrator | Saturday 01 November 2025 13:22:47 +0000 (0:00:09.032) 0:00:25.555 ***** 2025-11-01 13:27:32.202134 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-11-01 13:27:32.202145 | orchestrator | 2025-11-01 13:27:32.202156 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-11-01 13:27:32.202167 | orchestrator | Saturday 01 November 2025 13:22:51 +0000 (0:00:03.958) 0:00:29.513 ***** 2025-11-01 13:27:32.202178 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-01 13:27:32.202189 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-11-01 13:27:32.202200 | orchestrator | 2025-11-01 13:27:32.202211 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-11-01 
13:27:32.202222 | orchestrator | Saturday 01 November 2025 13:22:59 +0000 (0:00:08.469) 0:00:37.983 *****
2025-11-01 13:27:32.202233 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-11-01 13:27:32.202244 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-11-01 13:27:32.202254 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-11-01 13:27:32.202265 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-11-01 13:27:32.202276 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-11-01 13:27:32.202287 | orchestrator |
2025-11-01 13:27:32.202298 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-01 13:27:32.202309 | orchestrator | Saturday 01 November 2025 13:23:17 +0000 (0:00:17.783) 0:00:55.767 *****
2025-11-01 13:27:32.202333 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:27:32.202367 | orchestrator |
2025-11-01 13:27:32.202378 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-11-01 13:27:32.202389 | orchestrator | Saturday 01 November 2025 13:23:18 +0000 (0:00:00.610) 0:00:56.377 *****
2025-11-01 13:27:32.202400 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.202411 | orchestrator |
2025-11-01 13:27:32.202422 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-11-01 13:27:32.202433 | orchestrator | Saturday 01 November 2025 13:23:23 +0000 (0:00:05.248) 0:01:01.625 *****
2025-11-01 13:27:32.202444 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.202455 | orchestrator |
2025-11-01 13:27:32.202466 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-11-01 13:27:32.202490 | orchestrator | Saturday 01 November 2025 13:23:28 +0000 (0:00:05.224) 0:01:06.850 *****
2025-11-01 13:27:32.202501 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.202512 | orchestrator |
2025-11-01 13:27:32.202917 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-11-01 13:27:32.202932 | orchestrator | Saturday 01 November 2025 13:23:32 +0000 (0:00:03.685) 0:01:10.535 *****
2025-11-01 13:27:32.202943 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-11-01 13:27:32.202954 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-11-01 13:27:32.202965 | orchestrator |
2025-11-01 13:27:32.202976 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-11-01 13:27:32.202987 | orchestrator | Saturday 01 November 2025 13:23:44 +0000 (0:00:11.694) 0:01:22.230 *****
2025-11-01 13:27:32.202997 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-11-01 13:27:32.203009 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-11-01 13:27:32.203021 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-11-01 13:27:32.203033 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-11-01 13:27:32.203044 | orchestrator |
2025-11-01 13:27:32.203055 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-11-01 13:27:32.203066 | orchestrator | Saturday 01 November 2025 13:24:03 +0000 (0:00:19.121) 0:01:41.351 *****
2025-11-01 13:27:32.203076 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203087 | orchestrator |
2025-11-01 13:27:32.203107 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-11-01 13:27:32.203118 | orchestrator | Saturday 01 November 2025 13:24:08 +0000 (0:00:05.596) 0:01:46.948 *****
2025-11-01 13:27:32.203129 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203140 | orchestrator |
2025-11-01 13:27:32.203151 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-11-01 13:27:32.203161 | orchestrator | Saturday 01 November 2025 13:24:14 +0000 (0:00:05.866) 0:01:52.814 *****
2025-11-01 13:27:32.203172 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:27:32.203183 | orchestrator |
2025-11-01 13:27:32.203193 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-11-01 13:27:32.203204 | orchestrator | Saturday 01 November 2025 13:24:14 +0000 (0:00:00.224) 0:01:53.039 *****
2025-11-01 13:27:32.203215 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.203225 | orchestrator |
2025-11-01 13:27:32.203236 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-01 13:27:32.203247 | orchestrator | Saturday 01 November 2025 13:24:19 +0000 (0:00:04.833) 0:01:57.872 *****
2025-11-01 13:27:32.203257 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:27:32.203279 | orchestrator |
2025-11-01 13:27:32.203290 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-11-01 13:27:32.203301 | orchestrator | Saturday 01 November 2025 13:24:20 +0000 (0:00:01.168) 0:01:59.040 *****
2025-11-01 13:27:32.203311 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203322 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203333 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203368 | orchestrator |
2025-11-01 13:27:32.203380 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-11-01 13:27:32.203391 | orchestrator | Saturday 01 November 2025 13:24:26 +0000 (0:00:05.474) 0:02:04.514 *****
2025-11-01 13:27:32.203401 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203412 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203423 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203433 | orchestrator |
2025-11-01 13:27:32.203444 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-11-01 13:27:32.203455 | orchestrator | Saturday 01 November 2025 13:24:31 +0000 (0:00:04.905) 0:02:09.419 *****
2025-11-01 13:27:32.203465 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203476 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203487 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203497 | orchestrator |
2025-11-01 13:27:32.203508 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-11-01 13:27:32.203518 | orchestrator | Saturday 01 November 2025 13:24:32 +0000 (0:00:00.826) 0:02:10.246 *****
2025-11-01 13:27:32.203529 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:27:32.203542 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:27:32.203554 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.203566 | orchestrator |
2025-11-01 13:27:32.203578 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-11-01 13:27:32.203590 | orchestrator | Saturday 01 November 2025 13:24:34 +0000 (0:00:02.188) 0:02:12.435 *****
2025-11-01 13:27:32.203602 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203614 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203625 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203638 | orchestrator |
2025-11-01 13:27:32.203650 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-11-01 13:27:32.203662 | orchestrator | Saturday 01 November 2025 13:24:35 +0000 (0:00:01.379) 0:02:13.814 *****
2025-11-01 13:27:32.203674 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203686 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203698 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203710 | orchestrator |
2025-11-01 13:27:32.203722 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-11-01 13:27:32.203734 | orchestrator | Saturday 01 November 2025 13:24:36 +0000 (0:00:01.263) 0:02:15.077 *****
2025-11-01 13:27:32.203747 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203758 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203770 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203783 | orchestrator |
2025-11-01 13:27:32.203832 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-11-01 13:27:32.203847 | orchestrator | Saturday 01 November 2025 13:24:38 +0000 (0:00:02.059) 0:02:17.137 *****
2025-11-01 13:27:32.203859 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.203871 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.203884 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.203896 | orchestrator |
2025-11-01 13:27:32.203908 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-11-01 13:27:32.203921 | orchestrator | Saturday 01 November 2025 13:24:40 +0000 (0:00:01.840) 0:02:18.978 *****
2025-11-01 13:27:32.203932 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.203943 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:27:32.203953 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:27:32.203964 | orchestrator |
2025-11-01 13:27:32.203982 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-11-01 13:27:32.203993 | orchestrator | Saturday 01 November 2025 13:24:41 +0000 (0:00:00.681) 0:02:19.659 *****
2025-11-01 13:27:32.204003 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:27:32.204014 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:27:32.204025 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.204035 | orchestrator |
2025-11-01 13:27:32.204046 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-01 13:27:32.204057 | orchestrator | Saturday 01 November 2025 13:24:44 +0000 (0:00:02.995) 0:02:22.654 *****
2025-11-01 13:27:32.204067 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:27:32.204078 | orchestrator |
2025-11-01 13:27:32.204089 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-11-01 13:27:32.204099 | orchestrator | Saturday 01 November 2025 13:24:45 +0000 (0:00:00.603) 0:02:23.258 *****
2025-11-01 13:27:32.204110 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.204121 | orchestrator |
2025-11-01 13:27:32.204136 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-11-01 13:27:32.204148 | orchestrator | Saturday 01 November 2025 13:24:49 +0000 (0:00:04.422) 0:02:27.680 *****
2025-11-01 13:27:32.204158 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.204169 | orchestrator |
2025-11-01 13:27:32.204180 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-11-01 13:27:32.204190 | orchestrator | Saturday 01 November 2025 13:24:53 +0000 (0:00:03.837) 0:02:31.518 *****
2025-11-01 13:27:32.204201 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-11-01 13:27:32.204212 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-11-01 13:27:32.204222 | orchestrator |
2025-11-01 13:27:32.204233 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-11-01 13:27:32.204244 | orchestrator | Saturday 01 November 2025 13:25:01 +0000 (0:00:08.027) 0:02:39.546 *****
2025-11-01 13:27:32.204254 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.204265 | orchestrator |
2025-11-01 13:27:32.204276 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-11-01 13:27:32.204287 | orchestrator | Saturday 01 November 2025 13:25:05 +0000 (0:00:04.398) 0:02:43.944 *****
2025-11-01 13:27:32.204297 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:27:32.204308 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:27:32.204318 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:27:32.204329 | orchestrator |
2025-11-01 13:27:32.204358 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-11-01 13:27:32.204370 | orchestrator | Saturday 01 November 2025 13:25:06 +0000 (0:00:00.342) 0:02:44.286 *****
2025-11-01 13:27:32.204384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.204432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.204454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.204471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.204485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.204496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.204509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.204632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.204651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.204663 | orchestrator |
2025-11-01 13:27:32.204674 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-11-01 13:27:32.204685 | orchestrator | Saturday 01 November 2025 13:25:08 +0000 (0:00:02.858) 0:02:47.145 *****
2025-11-01 13:27:32.204696 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:27:32.204707 | orchestrator |
2025-11-01 13:27:32.204746 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-11-01 13:27:32.204758 | orchestrator | Saturday 01 November 2025 13:25:09 +0000 (0:00:00.141) 0:02:47.286 *****
2025-11-01 13:27:32.204770 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:27:32.204781 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:27:32.204791 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:27:32.204802 | orchestrator |
2025-11-01 13:27:32.204813 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-11-01 13:27:32.204824 | orchestrator | Saturday 01 November 2025 13:25:09 +0000 (0:00:00.599) 0:02:47.886 *****
2025-11-01 13:27:32.204841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.204853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.204865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.204907 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:27:32.204951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.204964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.204986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.204998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.205009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.205027 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:27:32.205039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.205080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.205093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.205109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.205121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.205132 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:27:32.205143 | orchestrator |
2025-11-01 13:27:32.205154 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-01 13:27:32.205171 | orchestrator | Saturday 01 November 2025 13:25:10 +0000 (0:00:00.771) 0:02:48.657 *****
2025-11-01 13:27:32.205182 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:27:32.205193 | orchestrator |
2025-11-01 13:27:32.205203 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-11-01 13:27:32.205214 | orchestrator | Saturday 01 November 2025 13:25:11 +0000 (0:00:00.618) 0:02:49.276 *****
2025-11-01 13:27:32.205225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.205267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.205280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-11-01 13:27:32.205297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.205309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.205327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-11-01 13:27:32.205358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.205371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.205389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager',
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.205491 | orchestrator | 2025-11-01 13:27:32.205502 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-11-01 13:27:32.205513 | orchestrator | Saturday 01 November 2025 13:25:16 +0000 (0:00:05.431) 0:02:54.708 ***** 2025-11-01 13:27:32.205524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.205541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:27:32.205559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.205593 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:27:32.205612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.205624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:27:32.205640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.205681 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:27:32.205692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.205704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:27:32.205721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.205768 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:27:32.205779 | orchestrator | 2025-11-01 13:27:32.205790 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-11-01 13:27:32.205801 | orchestrator | Saturday 01 November 2025 13:25:17 +0000 (0:00:01.068) 0:02:55.777 ***** 2025-11-01 13:27:32.205812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.205824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:27:32.205836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.205883 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:27:32.205903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.205915 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 13:27:32.205926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.205956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.205967 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:27:32.205983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-11-01 13:27:32.206004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-11-01 
13:27:32.206050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.206064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-11-01 13:27:32.206076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-11-01 13:27:32.206087 | orchestrator | skipping: [testbed-node-2] 2025-11-01 
13:27:32.206098 | orchestrator | 2025-11-01 13:27:32.206109 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-11-01 13:27:32.206120 | orchestrator | Saturday 01 November 2025 13:25:18 +0000 (0:00:01.026) 0:02:56.803 ***** 2025-11-01 13:27:32.206139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.206198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.206210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.206227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206377 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206389 | orchestrator | 2025-11-01 13:27:32.206400 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-11-01 13:27:32.206411 | orchestrator | Saturday 01 November 2025 13:25:24 +0000 (0:00:05.509) 0:03:02.312 ***** 2025-11-01 13:27:32.206422 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 13:27:32.206433 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 13:27:32.206444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-11-01 13:27:32.206455 | orchestrator | 2025-11-01 13:27:32.206466 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-11-01 13:27:32.206477 | orchestrator | Saturday 01 November 2025 13:25:26 +0000 (0:00:02.132) 0:03:04.445 ***** 2025-11-01 13:27:32.206488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.206539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.206555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.206567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-11-01 13:27:32.206578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2025-11-01 13:27:32.206624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206664 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-11-01 13:27:32.206704 | orchestrator | 2025-11-01 13:27:32.206715 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-11-01 13:27:32.206726 | orchestrator | Saturday 
01 November 2025 13:25:44 +0000 (0:00:18.300) 0:03:22.746 ***** 2025-11-01 13:27:32.206737 | orchestrator | changed: [testbed-node-0] 2025-11-01 13:27:32.206749 | orchestrator | changed: [testbed-node-1] 2025-11-01 13:27:32.206759 | orchestrator | changed: [testbed-node-2] 2025-11-01 13:27:32.206770 | orchestrator | 2025-11-01 13:27:32.206781 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-11-01 13:27:32.206792 | orchestrator | Saturday 01 November 2025 13:25:46 +0000 (0:00:01.670) 0:03:24.416 ***** 2025-11-01 13:27:32.206802 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.206813 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.206829 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.206840 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.206851 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.206862 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.206873 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.206884 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.206895 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.206905 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 13:27:32.206916 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 13:27:32.206926 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 13:27:32.206937 | orchestrator | 2025-11-01 13:27:32.206948 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-11-01 13:27:32.206959 | orchestrator | Saturday 01 November 2025 
13:25:51 +0000 (0:00:05.656) 0:03:30.072 ***** 2025-11-01 13:27:32.206970 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.206981 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.206991 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.207002 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207013 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207023 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207034 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207050 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207061 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207072 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207082 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207093 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207104 | orchestrator | 2025-11-01 13:27:32.207114 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-11-01 13:27:32.207125 | orchestrator | Saturday 01 November 2025 13:25:57 +0000 (0:00:05.858) 0:03:35.931 ***** 2025-11-01 13:27:32.207136 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.207147 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.207158 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-11-01 13:27:32.207169 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207190 | orchestrator | changed: 
[testbed-node-0] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207201 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-11-01 13:27:32.207212 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207222 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207233 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-11-01 13:27:32.207244 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207255 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207265 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-11-01 13:27:32.207276 | orchestrator | 2025-11-01 13:27:32.207287 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-11-01 13:27:32.207297 | orchestrator | Saturday 01 November 2025 13:26:03 +0000 (0:00:05.427) 0:03:41.359 ***** 2025-11-01 13:27:32.207309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 
13:27:32.207327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.207362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-11-01 13:27:32.207374 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.207392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.207404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-11-01 13:27:32.207415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.207432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.207444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-11-01 13:27:32.207460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.207472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.207489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-11-01 13:27:32.207500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.207512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.207528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-11-01 13:27:32.207540 | orchestrator |
2025-11-01 13:27:32.207551 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-11-01 13:27:32.207562 | orchestrator | Saturday 01 November 2025 13:26:07 +0000 (0:00:03.987) 0:03:45.346 *****
2025-11-01 13:27:32.207573 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:27:32.207584 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:27:32.207595 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:27:32.207605 | orchestrator |
2025-11-01 13:27:32.207616 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-11-01 13:27:32.207627 | orchestrator | Saturday 01 November 2025 13:26:07 +0000 (0:00:00.351) 0:03:45.697 *****
2025-11-01 13:27:32.207638 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207649 | orchestrator |
2025-11-01 13:27:32.207660 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-11-01 13:27:32.207670 | orchestrator | Saturday 01 November 2025 13:26:09 +0000 (0:00:02.430) 0:03:48.127 *****
2025-11-01 13:27:32.207681 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207692 | orchestrator |
2025-11-01 13:27:32.207715 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-11-01 13:27:32.207726 | orchestrator | Saturday 01 November 2025 13:26:12 +0000 (0:00:02.552) 0:03:50.680 *****
2025-11-01 13:27:32.207737 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207748 | orchestrator |
2025-11-01 13:27:32.207758 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-11-01 13:27:32.207769 | orchestrator | Saturday 01 November 2025 13:26:15 +0000 (0:00:02.640) 0:03:53.320 *****
2025-11-01 13:27:32.207780 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207791 | orchestrator |
2025-11-01 13:27:32.207806 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-11-01 13:27:32.207817 | orchestrator | Saturday 01 November 2025 13:26:18 +0000 (0:00:03.233) 0:03:56.554 *****
2025-11-01 13:27:32.207828 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207839 | orchestrator |
2025-11-01 13:27:32.207849 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-11-01 13:27:32.207860 | orchestrator | Saturday 01 November 2025 13:26:41 +0000 (0:00:23.525) 0:04:20.079 *****
2025-11-01 13:27:32.207871 | orchestrator |
2025-11-01 13:27:32.207882 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-11-01 13:27:32.207892 | orchestrator | Saturday 01 November 2025 13:26:41 +0000 (0:00:00.086) 0:04:20.166 *****
2025-11-01 13:27:32.207903 | orchestrator |
2025-11-01 13:27:32.207914 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-11-01 13:27:32.207925 | orchestrator | Saturday 01 November 2025 13:26:42 +0000 (0:00:00.096) 0:04:20.262 *****
2025-11-01 13:27:32.207935 | orchestrator |
2025-11-01 13:27:32.207946 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-11-01 13:27:32.207957 | orchestrator | Saturday 01 November 2025 13:26:42 +0000 (0:00:00.076) 0:04:20.338 *****
2025-11-01 13:27:32.207968 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.207979 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.207990 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.208001 | orchestrator |
2025-11-01 13:27:32.208012 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-11-01 13:27:32.208023 | orchestrator | Saturday 01 November 2025 13:26:53 +0000 (0:00:11.767) 0:04:32.106 *****
2025-11-01 13:27:32.208034 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.208045 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.208055 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.208066 | orchestrator |
2025-11-01 13:27:32.208077 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-11-01 13:27:32.208088 | orchestrator | Saturday 01 November 2025 13:27:00 +0000 (0:00:06.930) 0:04:39.036 *****
2025-11-01 13:27:32.208098 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.208109 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.208120 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.208131 | orchestrator |
2025-11-01 13:27:32.208141 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-11-01 13:27:32.208152 | orchestrator | Saturday 01 November 2025 13:27:11 +0000 (0:00:10.869) 0:04:49.905 *****
2025-11-01 13:27:32.208163 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.208174 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.208185 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.208195 | orchestrator |
2025-11-01 13:27:32.208206 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-11-01 13:27:32.208217 | orchestrator | Saturday 01 November 2025 13:27:20 +0000 (0:00:09.100) 0:04:59.006 *****
2025-11-01 13:27:32.208228 | orchestrator | changed: [testbed-node-1]
2025-11-01 13:27:32.208238 | orchestrator | changed: [testbed-node-2]
2025-11-01 13:27:32.208249 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:27:32.208260 | orchestrator |
2025-11-01 13:27:32.208271 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:27:32.208282 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-11-01 13:27:32.208299 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 13:27:32.208311 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-11-01 13:27:32.208322 | orchestrator |
2025-11-01 13:27:32.208333 | orchestrator |
2025-11-01 13:27:32.208393 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:27:32.208405 | orchestrator | Saturday 01 November 2025 13:27:29 +0000 (0:00:08.749) 0:05:07.755 *****
2025-11-01 13:27:32.208422 | orchestrator | ===============================================================================
2025-11-01 13:27:32.208433 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.53s
2025-11-01 13:27:32.208444 | orchestrator | octavia : Add rules for security groups -------------------------------- 19.12s
2025-11-01 13:27:32.208455 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.30s
2025-11-01 13:27:32.208465 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.78s
2025-11-01 13:27:32.208476 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.77s
2025-11-01 13:27:32.208487 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.69s
2025-11-01 13:27:32.208497 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.87s
2025-11-01 13:27:32.208508 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.10s
2025-11-01 13:27:32.208519 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.03s
2025-11-01 13:27:32.208529 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.75s
2025-11-01 13:27:32.208540 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.47s
2025-11-01 13:27:32.208551 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.03s
2025-11-01 13:27:32.208561 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.46s
2025-11-01 13:27:32.208572 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.93s
2025-11-01 13:27:32.208583 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.87s
2025-11-01 13:27:32.208599 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.86s
2025-11-01 13:27:32.208610 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.66s
2025-11-01 13:27:32.208620 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.60s
2025-11-01 13:27:32.208631 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.51s
2025-11-01 13:27:32.208642 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.47s
2025-11-01 13:27:32.208653 | orchestrator | 2025-11-01 13:27:32 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:35.243707 | orchestrator | 2025-11-01 13:27:35 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:35.243796 | orchestrator | 2025-11-01 13:27:35 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:38.288878 | orchestrator | 2025-11-01 13:27:38 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:38.288975 | orchestrator | 2025-11-01 13:27:38 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:41.334276 | orchestrator | 2025-11-01 13:27:41 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:41.334388 | orchestrator | 2025-11-01 13:27:41 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:44.384287 | orchestrator | 2025-11-01 13:27:44 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:44.384435 | orchestrator | 2025-11-01 13:27:44 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:47.427568 | orchestrator | 2025-11-01 13:27:47 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:47.427662 | orchestrator | 2025-11-01 13:27:47 | INFO  | Wait 1 second(s) until the next check
2025-11-01 13:27:50.478858 | orchestrator | 2025-11-01 13:27:50 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED
2025-11-01 13:27:50.478955 | orchestrator | 2025-11-01
13:27:50 | INFO  | Wait 1 second(s) until the next check
d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:34.394986 | orchestrator | 2025-11-01 13:33:34 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:37.441480 | orchestrator | 2025-11-01 13:33:37 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:37.441566 | orchestrator | 2025-11-01 13:33:37 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:40.486147 | orchestrator | 2025-11-01 13:33:40 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:40.486240 | orchestrator | 2025-11-01 13:33:40 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:43.527647 | orchestrator | 2025-11-01 13:33:43 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:43.527742 | orchestrator | 2025-11-01 13:33:43 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:46.571978 | orchestrator | 2025-11-01 13:33:46 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:46.572078 | orchestrator | 2025-11-01 13:33:46 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:49.625532 | orchestrator | 2025-11-01 13:33:49 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state STARTED 2025-11-01 13:33:49.625650 | orchestrator | 2025-11-01 13:33:49 | INFO  | Wait 1 second(s) until the next check 2025-11-01 13:33:52.666743 | orchestrator | 2025-11-01 13:33:52 | INFO  | Task d2849631-402d-43da-9027-8d3c2ac08405 is in state SUCCESS 2025-11-01 13:33:52.666841 | orchestrator | 2025-11-01 13:33:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:33:55.707703 | orchestrator | 2025-11-01 13:33:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:33:58.749284 | orchestrator | 2025-11-01 13:33:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:01.780063 | orchestrator | 2025-11-01 13:34:01 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-11-01 13:34:04.811862 | orchestrator | 2025-11-01 13:34:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:07.851595 | orchestrator | 2025-11-01 13:34:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:10.901030 | orchestrator | 2025-11-01 13:34:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:13.946953 | orchestrator | 2025-11-01 13:34:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:16.981316 | orchestrator | 2025-11-01 13:34:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:20.019461 | orchestrator | 2025-11-01 13:34:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:23.058525 | orchestrator | 2025-11-01 13:34:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:26.095813 | orchestrator | 2025-11-01 13:34:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:29.132826 | orchestrator | 2025-11-01 13:34:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:32.174074 | orchestrator | 2025-11-01 13:34:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:35.218711 | orchestrator | 2025-11-01 13:34:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:38.258475 | orchestrator | 2025-11-01 13:34:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:41.298777 | orchestrator | 2025-11-01 13:34:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:44.339857 | orchestrator | 2025-11-01 13:34:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:47.374225 | orchestrator | 2025-11-01 13:34:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-11-01 13:34:50.413624 | orchestrator | 2025-11-01 13:34:50 | INFO  | Wait 1 second(s) until refresh of running tasks 
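The long run collapsed above is the OSISM client polling a task until it reaches a terminal state: look up the state, log it, wait, repeat. A minimal sketch of that behavior, with `get_state` a hypothetical stand-in for the real status lookup (not the actual OSISM code):

```python
import time

def wait_for_task(get_state, task_id, interval=3.0):
    """Poll a task's state until it leaves PENDING/STARTED, like the loop above."""
    while True:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state
        print("Wait 1 second(s) until the next check")
        time.sleep(interval)

# Simulated backend: reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda _id: next(states), "d2849631", interval=0)
# result == "SUCCESS"
```

The fixed sleep between checks is why the log shows one line roughly every three seconds.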
2025-11-01 13:34:53.456455 | orchestrator |
2025-11-01 13:34:53.456564 | orchestrator |
2025-11-01 13:34:53.456580 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-11-01 13:34:53.456592 | orchestrator |
2025-11-01 13:34:53.456639 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-11-01 13:34:53.456652 | orchestrator | Saturday 01 November 2025 13:17:21 +0000 (0:00:00.235) 0:00:00.235 *****
2025-11-01 13:34:53.456664 | orchestrator | changed: [localhost]
2025-11-01 13:34:53.456676 | orchestrator |
2025-11-01 13:34:53.456687 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-11-01 13:34:53.456699 | orchestrator | Saturday 01 November 2025 13:17:24 +0000 (0:00:02.897) 0:00:03.132 *****
2025-11-01 13:34:53.456710 | orchestrator |
2025-11-01 13:34:53.456721 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[... "STILL ALIVE" heartbeat repeated while the download ran (roughly 16 minutes) ...]
2025-11-01 13:34:53.457394 | orchestrator | changed: [localhost]
2025-11-01 13:34:53.457405 | orchestrator |
2025-11-01 13:34:53.457416 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-11-01 13:34:53.457427 | orchestrator | Saturday 01 November 2025 13:33:43 +0000 (0:16:19.918) 0:16:23.051 *****
2025-11-01 13:34:53.457437 | orchestrator | changed: [localhost]
2025-11-01 13:34:53.457455 | orchestrator |
2025-11-01 13:34:53.457466 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:34:53.457476 | orchestrator |
2025-11-01 13:34:53.457487 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:34:53.457498 | orchestrator | Saturday 01 November 2025 13:33:48 +0000 (0:00:04.269) 0:16:27.320 *****
2025-11-01 13:34:53.457508 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:34:53.457519 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:34:53.457530 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:34:53.457540 | orchestrator |
2025-11-01 13:34:53.457566 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:34:53.457577 | orchestrator | Saturday 01 November 2025 13:33:48 +0000 (0:00:00.384) 0:16:27.705 *****
2025-11-01 13:34:53.457588 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-11-01 13:34:53.457599 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-11-01 13:34:53.457610 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-11-01 13:34:53.457621 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-11-01 13:34:53.457631 | orchestrator |
2025-11-01 13:34:53.457642 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-11-01 13:34:53.457653 | orchestrator | skipping: no hosts matched
2025-11-01 13:34:53.457664 | orchestrator |
2025-11-01 13:34:53.457675 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:34:53.457702 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:34:53.457714 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:34:53.457726 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:34:53.457737 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-11-01 13:34:53.457748 | orchestrator |
2025-11-01 13:34:53.457758 | orchestrator |
2025-11-01 13:34:53.457769 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:34:53.457780 | orchestrator | Saturday 01 November 2025 13:33:49 +0000 (0:00:00.682) 0:16:28.387 *****
2025-11-01 13:34:53.457790 | orchestrator | ===============================================================================
2025-11-01 13:34:53.457801 | orchestrator | Download ironic-agent initramfs --------------------------------------- 979.92s
2025-11-01 13:34:53.457812 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.27s
2025-11-01 13:34:53.457822 | orchestrator | Ensure the destination directory exists --------------------------------- 2.90s
2025-11-01 13:34:53.457833 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2025-11-01 13:34:53.457844 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2025-11-01 13:34:53.457854 | orchestrator |
2025-11-01 13:34:53.869487 | orchestrator |
2025-11-01 13:34:53.876452 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Nov 1 13:34:53 UTC 2025
2025-11-01 13:34:53.876497 | orchestrator |
2025-11-01 13:34:54.217531 | orchestrator | ok: Runtime: 0:46:46.227262
2025-11-01 13:34:54.475121 |
2025-11-01 13:34:54.475266 | TASK [Bootstrap services]
2025-11-01 13:34:55.193733 | orchestrator |
2025-11-01 13:34:55.193876 | orchestrator | # BOOTSTRAP
2025-11-01 13:34:55.193898 | orchestrator |
2025-11-01 13:34:55.193912 | orchestrator | + set -e
2025-11-01 13:34:55.193925 | orchestrator | + echo
2025-11-01 13:34:55.193939 | orchestrator | + echo '# BOOTSTRAP'
2025-11-01 13:34:55.193957 | orchestrator | + echo
2025-11-01 13:34:55.194000 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-11-01 13:34:55.201661 | orchestrator | + set -e
2025-11-01 13:34:55.201690 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-11-01 13:35:00.651546 | orchestrator | 2025-11-01 13:35:00 | INFO  | It takes a moment until task f2245c79-945e-4808-9bd6-885ecf591445 (flavor-manager) has been started and output is visible here.
2025-11-01 13:35:09.832766 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1L-1 created
2025-11-01 13:35:09.832913 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1L-1-5 created
2025-11-01 13:35:09.832934 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1V-2 created
2025-11-01 13:35:09.832947 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1V-2-5 created
2025-11-01 13:35:09.832959 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1V-4 created
2025-11-01 13:35:09.832970 | orchestrator | 2025-11-01 13:35:05 | INFO  | Flavor SCS-1V-4-10 created
2025-11-01 13:35:09.832981 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-1V-8 created
2025-11-01 13:35:09.832994 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-1V-8-20 created
2025-11-01 13:35:09.833015 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-2V-4 created
2025-11-01 13:35:09.833026 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-2V-4-10 created
2025-11-01 13:35:09.833038 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-2V-8 created
2025-11-01 13:35:09.833049 | orchestrator | 2025-11-01 13:35:06 | INFO  | Flavor SCS-2V-8-20 created
2025-11-01 13:35:09.833060 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-2V-16 created
2025-11-01 13:35:09.833071 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-2V-16-50 created
2025-11-01 13:35:09.833082 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-4V-8 created
2025-11-01 13:35:09.833094 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-4V-8-20 created
2025-11-01 13:35:09.833105 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-4V-16 created
2025-11-01 13:35:09.833116 | orchestrator | 2025-11-01 13:35:07 | INFO  | Flavor SCS-4V-16-50 created
2025-11-01 13:35:09.833127 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-4V-32 created
2025-11-01 13:35:09.833138 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-4V-32-100 created
2025-11-01 13:35:09.833149 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-8V-16 created
2025-11-01 13:35:09.833161 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-8V-16-50 created
2025-11-01 13:35:09.833172 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-8V-32 created
2025-11-01 13:35:09.833183 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-8V-32-100 created
2025-11-01 13:35:09.833194 | orchestrator | 2025-11-01 13:35:08 | INFO  | Flavor SCS-16V-32 created
2025-11-01 13:35:09.833206 | orchestrator | 2025-11-01 13:35:09 | INFO  | Flavor SCS-16V-32-100 created
2025-11-01 13:35:09.833217 | orchestrator | 2025-11-01 13:35:09 | INFO  | Flavor SCS-2V-4-20s created
2025-11-01 13:35:09.833228 | orchestrator | 2025-11-01 13:35:09 | INFO  | Flavor SCS-4V-8-50s created
2025-11-01 13:35:09.833239 | orchestrator | 2025-11-01 13:35:09 | INFO  | Flavor SCS-8V-32-100s created
2025-11-01 13:35:12.423685 | orchestrator | 2025-11-01 13:35:12 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-11-01 13:35:12.498245 | orchestrator | 2025-11-01 13:35:12 | INFO  | Task d885d456-2894-41e6-a4c4-cbc82d3ad519 (bootstrap-basic) was prepared for execution.
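The flavor-manager run above creates a fixed catalogue of SCS flavors. The underlying idea is an idempotent "ensure" pass: compare the desired list against what already exists and create only the missing ones. A minimal sketch under that assumption (plain dicts stand in for the cloud API; the names follow the SCS `SCS-<vCPU>V-<RAM>-<disk>` convention, but the vcpus/ram/disk values here are illustrative, not taken from the log):

```python
# Hypothetical sketch of idempotent flavor creation, not the actual
# openstack-flavor-manager implementation.
FLAVORS = [
    {"name": "SCS-1L-1", "vcpus": 1, "ram": 1024, "disk": 0},
    {"name": "SCS-2V-4-10", "vcpus": 2, "ram": 4096, "disk": 10},
]

def missing_flavors(existing, wanted):
    """Return the wanted flavors that do not exist yet (matched by name)."""
    present = {f["name"] for f in existing}
    return [f for f in wanted if f["name"] not in present]

# SCS-1L-1 already exists, so only SCS-2V-4-10 needs to be created.
missing = missing_flavors([{"name": "SCS-1L-1"}], FLAVORS)
```

Because the comparison is by name, rerunning the bootstrap is safe: a second run finds every flavor present and creates nothing.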
2025-11-01 13:35:12.498304 | orchestrator | 2025-11-01 13:35:12 | INFO  | It takes a moment until task d885d456-2894-41e6-a4c4-cbc82d3ad519 (bootstrap-basic) has been started and output is visible here. 2025-11-01 13:36:19.912410 | orchestrator | 2025-11-01 13:36:19.912550 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-11-01 13:36:19.912569 | orchestrator | 2025-11-01 13:36:19.912581 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-11-01 13:36:19.912593 | orchestrator | Saturday 01 November 2025 13:35:17 +0000 (0:00:00.086) 0:00:00.086 ***** 2025-11-01 13:36:19.912605 | orchestrator | ok: [localhost] 2025-11-01 13:36:19.912617 | orchestrator | 2025-11-01 13:36:19.912629 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-11-01 13:36:19.912640 | orchestrator | Saturday 01 November 2025 13:35:19 +0000 (0:00:02.086) 0:00:02.172 ***** 2025-11-01 13:36:19.912651 | orchestrator | ok: [localhost] 2025-11-01 13:36:19.912662 | orchestrator | 2025-11-01 13:36:19.912674 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-11-01 13:36:19.912685 | orchestrator | Saturday 01 November 2025 13:35:28 +0000 (0:00:09.343) 0:00:11.516 ***** 2025-11-01 13:36:19.912696 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912707 | orchestrator | 2025-11-01 13:36:19.912718 | orchestrator | TASK [Get volume type local] *************************************************** 2025-11-01 13:36:19.912730 | orchestrator | Saturday 01 November 2025 13:35:37 +0000 (0:00:08.423) 0:00:19.940 ***** 2025-11-01 13:36:19.912742 | orchestrator | ok: [localhost] 2025-11-01 13:36:19.912753 | orchestrator | 2025-11-01 13:36:19.912764 | orchestrator | TASK [Create volume type local] ************************************************ 2025-11-01 13:36:19.912775 | orchestrator | Saturday 01 November 2025 
13:35:45 +0000 (0:00:08.309) 0:00:28.249 ***** 2025-11-01 13:36:19.912791 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912802 | orchestrator | 2025-11-01 13:36:19.912813 | orchestrator | TASK [Create public network] *************************************************** 2025-11-01 13:36:19.912825 | orchestrator | Saturday 01 November 2025 13:35:53 +0000 (0:00:07.476) 0:00:35.725 ***** 2025-11-01 13:36:19.912836 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912847 | orchestrator | 2025-11-01 13:36:19.912858 | orchestrator | TASK [Set public network to default] ******************************************* 2025-11-01 13:36:19.912869 | orchestrator | Saturday 01 November 2025 13:35:59 +0000 (0:00:06.176) 0:00:41.901 ***** 2025-11-01 13:36:19.912880 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912891 | orchestrator | 2025-11-01 13:36:19.912902 | orchestrator | TASK [Create public subnet] **************************************************** 2025-11-01 13:36:19.912923 | orchestrator | Saturday 01 November 2025 13:36:06 +0000 (0:00:07.014) 0:00:48.916 ***** 2025-11-01 13:36:19.912935 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912946 | orchestrator | 2025-11-01 13:36:19.912957 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-11-01 13:36:19.912968 | orchestrator | Saturday 01 November 2025 13:36:11 +0000 (0:00:04.867) 0:00:53.783 ***** 2025-11-01 13:36:19.912979 | orchestrator | changed: [localhost] 2025-11-01 13:36:19.912990 | orchestrator | 2025-11-01 13:36:19.913001 | orchestrator | TASK [Create manager role] ***************************************************** 2025-11-01 13:36:19.913012 | orchestrator | Saturday 01 November 2025 13:36:15 +0000 (0:00:04.230) 0:00:58.013 ***** 2025-11-01 13:36:19.913024 | orchestrator | ok: [localhost] 2025-11-01 13:36:19.913035 | orchestrator | 2025-11-01 13:36:19.913046 | orchestrator | PLAY RECAP 
********************************************************************* 2025-11-01 13:36:19.913057 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:36:19.913069 | orchestrator | 2025-11-01 13:36:19.913080 | orchestrator | 2025-11-01 13:36:19.913091 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:36:19.913124 | orchestrator | Saturday 01 November 2025 13:36:19 +0000 (0:00:04.125) 0:01:02.139 ***** 2025-11-01 13:36:19.913136 | orchestrator | =============================================================================== 2025-11-01 13:36:19.913147 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.34s 2025-11-01 13:36:19.913158 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.42s 2025-11-01 13:36:19.913169 | orchestrator | Get volume type local --------------------------------------------------- 8.31s 2025-11-01 13:36:19.913180 | orchestrator | Create volume type local ------------------------------------------------ 7.48s 2025-11-01 13:36:19.913191 | orchestrator | Set public network to default ------------------------------------------- 7.01s 2025-11-01 13:36:19.913202 | orchestrator | Create public network --------------------------------------------------- 6.18s 2025-11-01 13:36:19.913213 | orchestrator | Create public subnet ---------------------------------------------------- 4.87s 2025-11-01 13:36:19.913224 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.23s 2025-11-01 13:36:19.913235 | orchestrator | Create manager role ----------------------------------------------------- 4.13s 2025-11-01 13:36:19.913246 | orchestrator | Gathering Facts --------------------------------------------------------- 2.09s 2025-11-01 13:36:22.615875 | orchestrator | 2025-11-01 13:36:22 | INFO  | It takes a moment until task 
7267f2ee-dd85-43dd-89be-241bcf3f0277 (image-manager) has been started and output is visible here. 2025-11-01 13:37:05.603858 | orchestrator | 2025-11-01 13:36:25 | INFO  | Processing image 'Cirros 0.6.2' 2025-11-01 13:37:05.603975 | orchestrator | 2025-11-01 13:36:25 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-11-01 13:37:05.603997 | orchestrator | 2025-11-01 13:36:25 | INFO  | Importing image Cirros 0.6.2 2025-11-01 13:37:05.604010 | orchestrator | 2025-11-01 13:36:25 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-01 13:37:05.604023 | orchestrator | 2025-11-01 13:36:28 | INFO  | Waiting for image to leave queued state... 2025-11-01 13:37:05.604035 | orchestrator | 2025-11-01 13:36:30 | INFO  | Waiting for import to complete... 2025-11-01 13:37:05.604046 | orchestrator | 2025-11-01 13:36:40 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-11-01 13:37:05.604057 | orchestrator | 2025-11-01 13:36:40 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-11-01 13:37:05.604068 | orchestrator | 2025-11-01 13:36:40 | INFO  | Setting internal_version = 0.6.2 2025-11-01 13:37:05.604080 | orchestrator | 2025-11-01 13:36:40 | INFO  | Setting image_original_user = cirros 2025-11-01 13:37:05.604091 | orchestrator | 2025-11-01 13:36:40 | INFO  | Adding tag os:cirros 2025-11-01 13:37:05.604103 | orchestrator | 2025-11-01 13:36:41 | INFO  | Setting property architecture: x86_64 2025-11-01 13:37:05.604114 | orchestrator | 2025-11-01 13:36:41 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 13:37:05.604124 | orchestrator | 2025-11-01 13:36:41 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 13:37:05.604136 | orchestrator | 2025-11-01 13:36:42 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 13:37:05.604147 | orchestrator | 2025-11-01 13:36:42 | INFO  | Setting 
property hw_watchdog_action: reset 2025-11-01 13:37:05.604158 | orchestrator | 2025-11-01 13:36:42 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 13:37:05.604169 | orchestrator | 2025-11-01 13:36:42 | INFO  | Setting property os_distro: cirros 2025-11-01 13:37:05.604180 | orchestrator | 2025-11-01 13:36:42 | INFO  | Setting property os_purpose: minimal 2025-11-01 13:37:05.604191 | orchestrator | 2025-11-01 13:36:43 | INFO  | Setting property replace_frequency: never 2025-11-01 13:37:05.604223 | orchestrator | 2025-11-01 13:36:43 | INFO  | Setting property uuid_validity: none 2025-11-01 13:37:05.604235 | orchestrator | 2025-11-01 13:36:43 | INFO  | Setting property provided_until: none 2025-11-01 13:37:05.604254 | orchestrator | 2025-11-01 13:36:43 | INFO  | Setting property image_description: Cirros 2025-11-01 13:37:05.604270 | orchestrator | 2025-11-01 13:36:44 | INFO  | Setting property image_name: Cirros 2025-11-01 13:37:05.604281 | orchestrator | 2025-11-01 13:36:44 | INFO  | Setting property internal_version: 0.6.2 2025-11-01 13:37:05.604292 | orchestrator | 2025-11-01 13:36:44 | INFO  | Setting property image_original_user: cirros 2025-11-01 13:37:05.604303 | orchestrator | 2025-11-01 13:36:44 | INFO  | Setting property os_version: 0.6.2 2025-11-01 13:37:05.604314 | orchestrator | 2025-11-01 13:36:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-11-01 13:37:05.604360 | orchestrator | 2025-11-01 13:36:45 | INFO  | Setting property image_build_date: 2023-05-30 2025-11-01 13:37:05.604372 | orchestrator | 2025-11-01 13:36:45 | INFO  | Checking status of 'Cirros 0.6.2' 2025-11-01 13:37:05.604384 | orchestrator | 2025-11-01 13:36:45 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-11-01 13:37:05.604396 | orchestrator | 2025-11-01 13:36:45 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-11-01 13:37:05.604409 | orchestrator | 2025-11-01 13:36:46 
| INFO  | Processing image 'Cirros 0.6.3' 2025-11-01 13:37:05.604422 | orchestrator | 2025-11-01 13:36:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-11-01 13:37:05.604434 | orchestrator | 2025-11-01 13:36:46 | INFO  | Importing image Cirros 0.6.3 2025-11-01 13:37:05.604446 | orchestrator | 2025-11-01 13:36:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-01 13:37:05.604458 | orchestrator | 2025-11-01 13:36:47 | INFO  | Waiting for image to leave queued state... 2025-11-01 13:37:05.604471 | orchestrator | 2025-11-01 13:36:49 | INFO  | Waiting for import to complete... 2025-11-01 13:37:05.604501 | orchestrator | 2025-11-01 13:36:59 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-11-01 13:37:05.604515 | orchestrator | 2025-11-01 13:37:00 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-11-01 13:37:05.604527 | orchestrator | 2025-11-01 13:37:00 | INFO  | Setting internal_version = 0.6.3 2025-11-01 13:37:05.604539 | orchestrator | 2025-11-01 13:37:00 | INFO  | Setting image_original_user = cirros 2025-11-01 13:37:05.604552 | orchestrator | 2025-11-01 13:37:00 | INFO  | Adding tag os:cirros 2025-11-01 13:37:05.604564 | orchestrator | 2025-11-01 13:37:00 | INFO  | Setting property architecture: x86_64 2025-11-01 13:37:05.604577 | orchestrator | 2025-11-01 13:37:00 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 13:37:05.604589 | orchestrator | 2025-11-01 13:37:01 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 13:37:05.604601 | orchestrator | 2025-11-01 13:37:01 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 13:37:05.604614 | orchestrator | 2025-11-01 13:37:01 | INFO  | Setting property hw_watchdog_action: reset 2025-11-01 13:37:05.604626 | orchestrator | 2025-11-01 13:37:01 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 
13:37:05.604637 | orchestrator | 2025-11-01 13:37:01 | INFO  | Setting property os_distro: cirros 2025-11-01 13:37:05.604659 | orchestrator | 2025-11-01 13:37:02 | INFO  | Setting property os_purpose: minimal 2025-11-01 13:37:05.604672 | orchestrator | 2025-11-01 13:37:02 | INFO  | Setting property replace_frequency: never 2025-11-01 13:37:05.604684 | orchestrator | 2025-11-01 13:37:02 | INFO  | Setting property uuid_validity: none 2025-11-01 13:37:05.604697 | orchestrator | 2025-11-01 13:37:02 | INFO  | Setting property provided_until: none 2025-11-01 13:37:05.604709 | orchestrator | 2025-11-01 13:37:03 | INFO  | Setting property image_description: Cirros 2025-11-01 13:37:05.604721 | orchestrator | 2025-11-01 13:37:03 | INFO  | Setting property image_name: Cirros 2025-11-01 13:37:05.604733 | orchestrator | 2025-11-01 13:37:03 | INFO  | Setting property internal_version: 0.6.3 2025-11-01 13:37:05.604744 | orchestrator | 2025-11-01 13:37:03 | INFO  | Setting property image_original_user: cirros 2025-11-01 13:37:05.604755 | orchestrator | 2025-11-01 13:37:04 | INFO  | Setting property os_version: 0.6.3 2025-11-01 13:37:05.604766 | orchestrator | 2025-11-01 13:37:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-11-01 13:37:05.604777 | orchestrator | 2025-11-01 13:37:04 | INFO  | Setting property image_build_date: 2024-09-26 2025-11-01 13:37:05.604794 | orchestrator | 2025-11-01 13:37:04 | INFO  | Checking status of 'Cirros 0.6.3' 2025-11-01 13:37:05.604805 | orchestrator | 2025-11-01 13:37:04 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-11-01 13:37:05.604816 | orchestrator | 2025-11-01 13:37:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-11-01 13:37:05.986160 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-11-01 13:37:08.448769 | orchestrator | 2025-11-01 13:37:08 | INFO  | date: 2025-11-01 
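The repeated "Checking parameters ... Setting property ..." entries above show the image manager behaving idempotently: it compares each image's current metadata against the desired definition and only issues updates for properties that are missing or changed. A minimal sketch of that diffing step (hypothetical helper name, not the actual openstack-image-manager code):

```python
def properties_to_set(current, desired):
    """Return only the desired properties that are missing or differ.

    Hypothetical sketch of the idempotent check seen in the log: unchanged
    properties produce no 'Setting property' action at all.
    """
    return {k: v for k, v in desired.items() if current.get(k) != v}
```

With this shape, a second run against an already-converged image yields an empty dict and therefore no API calls.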
2025-11-01 13:37:08.448867 | orchestrator | 2025-11-01 13:37:08 | INFO  | image: octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 13:37:08.448885 | orchestrator | 2025-11-01 13:37:08 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 13:37:08.448899 | orchestrator | 2025-11-01 13:37:08 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2.CHECKSUM 2025-11-01 13:37:08.616298 | orchestrator | 2025-11-01 13:37:08 | INFO  | checksum: 665b63d55c855bb8158b5b9da75941485fad24fac81eb681f57aae95b3ea6c60 2025-11-01 13:37:08.702507 | orchestrator | 2025-11-01 13:37:08 | INFO  | It takes a moment until task c06e6f39-b663-40c4-b962-033d47bae371 (image-manager) has been started and output is visible here. 2025-11-01 13:38:21.009517 | orchestrator | 2025-11-01 13:37:11 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 13:38:21.009634 | orchestrator | 2025-11-01 13:37:11 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2: 200 2025-11-01 13:38:21.009652 | orchestrator | 2025-11-01 13:37:11 | INFO  | Importing image OpenStack Octavia Amphora 2025-11-01 2025-11-01 13:38:21.009666 | orchestrator | 2025-11-01 13:37:11 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 13:38:21.009677 | orchestrator | 2025-11-01 13:37:12 | INFO  | Waiting for image to leave queued state... 2025-11-01 13:38:21.009687 | orchestrator | 2025-11-01 13:37:14 | INFO  | Waiting for import to complete... 2025-11-01 13:38:21.009698 | orchestrator | 2025-11-01 13:37:24 | INFO  | Waiting for import to complete... 
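The amphora bootstrap step above fetches a published `.CHECKSUM` file and logs the expected SHA-256 before importing the qcow2 image. A sketch of the verification half of that flow, assuming SHA-256 and a locally downloaded file (function names are illustrative, not taken from the script):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so multi-GB images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checksum(path, expected):
    """Compare against the hex digest published in the .CHECKSUM file."""
    return sha256_of(path) == expected.strip().lower()
```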
2025-11-01 13:38:21.009731 | orchestrator | 2025-11-01 13:37:34 | INFO  | Waiting for import to complete... 2025-11-01 13:38:21.009741 | orchestrator | 2025-11-01 13:37:45 | INFO  | Waiting for import to complete... 2025-11-01 13:38:21.009751 | orchestrator | 2025-11-01 13:37:55 | INFO  | Waiting for import to complete... 2025-11-01 13:38:21.009761 | orchestrator | 2025-11-01 13:38:05 | INFO  | Waiting for import to complete... 2025-11-01 13:38:21.009770 | orchestrator | 2025-11-01 13:38:15 | INFO  | Import of 'OpenStack Octavia Amphora 2025-11-01' successfully completed, reloading images 2025-11-01 13:38:21.009781 | orchestrator | 2025-11-01 13:38:15 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 13:38:21.009791 | orchestrator | 2025-11-01 13:38:15 | INFO  | Setting internal_version = 2025-11-01 2025-11-01 13:38:21.009801 | orchestrator | 2025-11-01 13:38:15 | INFO  | Setting image_original_user = ubuntu 2025-11-01 13:38:21.009811 | orchestrator | 2025-11-01 13:38:15 | INFO  | Adding tag amphora 2025-11-01 13:38:21.009821 | orchestrator | 2025-11-01 13:38:16 | INFO  | Adding tag os:ubuntu 2025-11-01 13:38:21.009831 | orchestrator | 2025-11-01 13:38:16 | INFO  | Setting property architecture: x86_64 2025-11-01 13:38:21.009841 | orchestrator | 2025-11-01 13:38:16 | INFO  | Setting property hw_disk_bus: scsi 2025-11-01 13:38:21.009850 | orchestrator | 2025-11-01 13:38:16 | INFO  | Setting property hw_rng_model: virtio 2025-11-01 13:38:21.009860 | orchestrator | 2025-11-01 13:38:17 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-11-01 13:38:21.009869 | orchestrator | 2025-11-01 13:38:17 | INFO  | Setting property hw_watchdog_action: reset 2025-11-01 13:38:21.009879 | orchestrator | 2025-11-01 13:38:17 | INFO  | Setting property hypervisor_type: qemu 2025-11-01 13:38:21.009904 | orchestrator | 2025-11-01 13:38:17 | INFO  | Setting property os_distro: ubuntu 2025-11-01 13:38:21.009914 | orchestrator | 2025-11-01 13:38:17 | 
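The run of "Waiting for import to complete..." entries above is a fixed-interval poll: the importer re-reads the image status roughly every ten seconds until Glance reports the import finished. A generic sketch of such a wait loop (assumed structure only; the actual image-manager loop may differ):

```python
import time

def wait_for_status(fetch_status, target="active", interval=10, timeout=600):
    """Poll fetch_status() until it returns `target` or the timeout expires.

    Sketch of the fixed-interval wait seen in the log; `fetch_status` stands
    in for a Glance API call returning the current image status.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status() == target:
            return target
        time.sleep(interval)
    raise TimeoutError(f"image did not reach '{target}' within {timeout}s")
```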
INFO  | Setting property replace_frequency: quarterly 2025-11-01 13:38:21.009924 | orchestrator | 2025-11-01 13:38:18 | INFO  | Setting property uuid_validity: last-1 2025-11-01 13:38:21.009933 | orchestrator | 2025-11-01 13:38:18 | INFO  | Setting property provided_until: none 2025-11-01 13:38:21.009943 | orchestrator | 2025-11-01 13:38:18 | INFO  | Setting property os_purpose: network 2025-11-01 13:38:21.009952 | orchestrator | 2025-11-01 13:38:18 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-11-01 13:38:21.009962 | orchestrator | 2025-11-01 13:38:19 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-11-01 13:38:21.009972 | orchestrator | 2025-11-01 13:38:19 | INFO  | Setting property internal_version: 2025-11-01 2025-11-01 13:38:21.009982 | orchestrator | 2025-11-01 13:38:19 | INFO  | Setting property image_original_user: ubuntu 2025-11-01 13:38:21.009991 | orchestrator | 2025-11-01 13:38:19 | INFO  | Setting property os_version: 2025-11-01 2025-11-01 13:38:21.010001 | orchestrator | 2025-11-01 13:38:20 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20251101.qcow2 2025-11-01 13:38:21.010061 | orchestrator | 2025-11-01 13:38:20 | INFO  | Setting property image_build_date: 2025-11-01 2025-11-01 13:38:21.010074 | orchestrator | 2025-11-01 13:38:20 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 13:38:21.010085 | orchestrator | 2025-11-01 13:38:20 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-11-01' 2025-11-01 13:38:21.010115 | orchestrator | 2025-11-01 13:38:20 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-11-01 13:38:21.010134 | orchestrator | 2025-11-01 13:38:20 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-11-01 13:38:21.010147 | orchestrator | 2025-11-01 13:38:20 | INFO  | Processing image 'Cirros 0.6.2' 
(removal candidate) 2025-11-01 13:38:21.010159 | orchestrator | 2025-11-01 13:38:20 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-11-01 13:38:21.652120 | orchestrator | ok: Runtime: 0:03:26.590627 2025-11-01 13:38:21.676771 | 2025-11-01 13:38:21.676894 | TASK [Run checks] 2025-11-01 13:38:22.353889 | orchestrator | + set -e 2025-11-01 13:38:22.354101 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 13:38:22.354126 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 13:38:22.354156 | orchestrator | ++ INTERACTIVE=false 2025-11-01 13:38:22.354169 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 13:38:22.354180 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 13:38:22.354192 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-01 13:38:22.355408 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:38:22.361812 | orchestrator | 2025-11-01 13:38:22.361840 | orchestrator | # CHECK 2025-11-01 13:38:22.361851 | orchestrator | 2025-11-01 13:38:22.361861 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:38:22.361875 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:38:22.361885 | orchestrator | + echo 2025-11-01 13:38:22.361894 | orchestrator | + echo '# CHECK' 2025-11-01 13:38:22.361904 | orchestrator | + echo 2025-11-01 13:38:22.361917 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 13:38:22.363164 | orchestrator | ++ semver latest 5.0.0 2025-11-01 13:38:22.426753 | orchestrator | 2025-11-01 13:38:22.426781 | orchestrator | ## Containers @ testbed-manager 2025-11-01 13:38:22.426790 | orchestrator | 2025-11-01 13:38:22.426801 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 13:38:22.426811 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:22.426820 | orchestrator | + echo 2025-11-01 
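The check script above gates its behavior on `semver latest 5.0.0`, which returns -1 here, i.e. the `latest` tag is ordered before any concrete release. A hypothetical re-implementation of that comparison (not the testbed's actual `semver` helper, and assuming plain dotted numeric versions):

```python
def compare_versions(a, b):
    """Return -1/0/1 ordering two version strings.

    'latest' is treated as a special tag sorting before any concrete
    release, matching the `semver latest 5.0.0` -> -1 result in the log.
    """
    if a == b:
        return 0
    if a == "latest":
        return -1
    if b == "latest":
        return 1
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return (pa > pb) - (pa < pb)
```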
13:38:22.426831 | orchestrator | + echo '## Containers @ testbed-manager' 2025-11-01 13:38:22.426840 | orchestrator | + echo 2025-11-01 13:38:22.426850 | orchestrator | + osism container testbed-manager ps 2025-11-01 13:38:25.021253 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 13:38:25.021424 | orchestrator | ea172cbe6d2a registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes prometheus_blackbox_exporter 2025-11-01 13:38:25.021463 | orchestrator | 08cbe2116ac4 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_alertmanager 2025-11-01 13:38:25.021476 | orchestrator | 2f819d963446 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_cadvisor 2025-11-01 13:38:25.021495 | orchestrator | 8872d5dca3c1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes prometheus_node_exporter 2025-11-01 13:38:25.021507 | orchestrator | 16193a661732 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes prometheus_server 2025-11-01 13:38:25.021523 | orchestrator | 7ad1921df898 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 28 minutes ago Up 27 minutes cephclient 2025-11-01 13:38:25.021535 | orchestrator | 8d09de676d17 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes cron 2025-11-01 13:38:25.021547 | orchestrator | 45931cd72b2a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes kolla_toolbox 2025-11-01 13:38:25.021559 | orchestrator | 544591ddd9c6 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes fluentd 2025-11-01 13:38:25.021594 | orchestrator | 136c932d743a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 42 
minutes ago Up 42 minutes (healthy) 80/tcp phpmyadmin 2025-11-01 13:38:25.021607 | orchestrator | 99cd30236a1b registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 43 minutes ago Up 43 minutes openstackclient 2025-11-01 13:38:25.021618 | orchestrator | b056fc56c1e5 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 43 minutes ago Up 43 minutes (healthy) 8080/tcp homer 2025-11-01 13:38:25.021630 | orchestrator | 06c7422bc00d registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" About an hour ago Up About an hour (healthy) 192.168.16.5:3128->3128/tcp squid 2025-11-01 13:38:25.021642 | orchestrator | 477880dcfc72 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 50 minutes (healthy) manager-inventory_reconciler-1 2025-11-01 13:38:25.021653 | orchestrator | 9c1aee0da3a9 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 51 minutes (healthy) osism-kubernetes 2025-11-01 13:38:25.021683 | orchestrator | 0b3012d2207e registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 51 minutes (healthy) osism-ansible 2025-11-01 13:38:25.021701 | orchestrator | bf4b4fd3d2e7 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 51 minutes (healthy) kolla-ansible 2025-11-01 13:38:25.021713 | orchestrator | 08f3f9402da5 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 51 minutes (healthy) ceph-ansible 2025-11-01 13:38:25.021724 | orchestrator | eed5de287ace registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 51 minutes (healthy) 8000/tcp manager-ara-server-1 2025-11-01 13:38:25.021736 | orchestrator | 99990a8ee4aa registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 51 minutes (healthy) manager-beat-1 2025-11-01 13:38:25.021747 | orchestrator | cea69e15c2e1 
registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 51 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-11-01 13:38:25.021758 | orchestrator | 8698b725ebcf registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 51 minutes (healthy) osismclient 2025-11-01 13:38:25.021769 | orchestrator | 4c70932eb65f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 51 minutes (healthy) manager-flower-1 2025-11-01 13:38:25.021788 | orchestrator | 993b39f90996 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" About an hour ago Up 51 minutes (healthy) 3306/tcp manager-mariadb-1 2025-11-01 13:38:25.021799 | orchestrator | 333be8df1ee3 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 51 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-11-01 13:38:25.021810 | orchestrator | a12d502edc88 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 51 minutes (healthy) manager-listener-1 2025-11-01 13:38:25.021822 | orchestrator | 83786076c133 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 51 minutes (healthy) manager-openstack-1 2025-11-01 13:38:25.021833 | orchestrator | 3aac352044b2 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 51 minutes (healthy) 6379/tcp manager-redis-1 2025-11-01 13:38:25.021845 | orchestrator | 808bc0d97cc6 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-11-01 13:38:25.466847 | orchestrator | 2025-11-01 13:38:25.466933 | orchestrator | ## Images @ testbed-manager 2025-11-01 13:38:25.466945 | orchestrator | 2025-11-01 13:38:25.466956 | orchestrator | + echo 2025-11-01 13:38:25.466966 | orchestrator | + echo '## 
Images @ testbed-manager' 2025-11-01 13:38:25.466977 | orchestrator | + echo 2025-11-01 13:38:25.466986 | orchestrator | + osism container testbed-manager images 2025-11-01 13:38:27.946699 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 13:38:27.946814 | orchestrator | registry.osism.tech/osism/homer v25.10.1 97ec70bd825b 10 hours ago 11.5MB 2025-11-01 13:38:27.946831 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 8ba85c7431b1 10 hours ago 236MB 2025-11-01 13:38:27.946842 | orchestrator | registry.osism.tech/osism/cephclient reef ff95829428ad 10 hours ago 453MB 2025-11-01 13:38:27.946854 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 12 hours ago 267MB 2025-11-01 13:38:27.946883 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 12 hours ago 580MB 2025-11-01 13:38:27.946895 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 12 hours ago 671MB 2025-11-01 13:38:27.946905 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 11fbf30a3486 12 hours ago 309MB 2025-11-01 13:38:27.946916 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 12 hours ago 307MB 2025-11-01 13:38:27.946927 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8b134aefa73f 12 hours ago 840MB 2025-11-01 13:38:27.946938 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 12 hours ago 358MB 2025-11-01 13:38:27.946949 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 b8f395f83943 12 hours ago 405MB 2025-11-01 13:38:27.946960 | orchestrator | registry.osism.tech/osism/osism-ansible latest fb95637d6084 13 hours ago 597MB 2025-11-01 13:38:27.946970 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 f9cd9a3567f2 13 hours ago 593MB 2025-11-01 13:38:27.946981 | orchestrator | registry.osism.tech/osism/ceph-ansible reef f76c3643e07b 14 hours ago 545MB 2025-11-01 
13:38:27.947020 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest b83a70ae01c7 14 hours ago 1.21GB 2025-11-01 13:38:27.947034 | orchestrator | registry.osism.tech/osism/osism-frontend latest de8d3d001e53 14 hours ago 238MB 2025-11-01 13:38:27.947045 | orchestrator | registry.osism.tech/osism/osism latest 2ee25247ce5a 14 hours ago 323MB 2025-11-01 13:38:27.947055 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 5651c69c70d7 14 hours ago 316MB 2025-11-01 13:38:27.947066 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 weeks ago 742MB 2025-11-01 13:38:27.947077 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 2 months ago 275MB 2025-11-01 13:38:27.947087 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 ea44c9edeacf 2 months ago 329MB 2025-11-01 13:38:27.947098 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 3 months ago 226MB 2025-11-01 13:38:27.947109 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 3 months ago 41.4MB 2025-11-01 13:38:27.947120 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 16 months ago 146MB 2025-11-01 13:38:28.352292 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 13:38:28.352986 | orchestrator | ++ semver latest 5.0.0 2025-11-01 13:38:28.412392 | orchestrator | 2025-11-01 13:38:28.412418 | orchestrator | ## Containers @ testbed-node-0 2025-11-01 13:38:28.412430 | orchestrator | 2025-11-01 13:38:28.412440 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 13:38:28.412451 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:28.412462 | orchestrator | + echo 2025-11-01 13:38:28.412473 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-11-01 13:38:28.412484 | orchestrator | + echo 2025-11-01 13:38:28.412494 | orchestrator | + osism container testbed-node-0 ps 2025-11-01 
13:38:31.108124 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 13:38:31.110295 | orchestrator | 48fe1a0a834f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_worker 2025-11-01 13:38:31.110343 | orchestrator | 251046fb9a5c registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_housekeeping 2025-11-01 13:38:31.110355 | orchestrator | aeea6bb5f269 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_health_manager 2025-11-01 13:38:31.110365 | orchestrator | 94a8f7bc4a10 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes octavia_driver_agent 2025-11-01 13:38:31.110374 | orchestrator | bc8d95345206 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_api 2025-11-01 13:38:31.110384 | orchestrator | a7a04af8da58 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes grafana 2025-11-01 13:38:31.110393 | orchestrator | a8f0e7212a2d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_conductor 2025-11-01 13:38:31.110403 | orchestrator | a36f7f70a11a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_api 2025-11-01 13:38:31.110428 | orchestrator | 738b6ca3d744 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2025-11-01 13:38:31.110456 | orchestrator | eff5e6c1c35a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_worker 2025-11-01 13:38:31.110467 | orchestrator | 31060f491678 
registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) nova_novncproxy 2025-11-01 13:38:31.110476 | orchestrator | b1c9203be534 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_mdns 2025-11-01 13:38:31.110486 | orchestrator | 2231fac95a79 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_producer 2025-11-01 13:38:31.110495 | orchestrator | 9ca3a4540187 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 19 minutes ago Up 17 minutes (healthy) nova_conductor 2025-11-01 13:38:31.110505 | orchestrator | 5da66a3e6182 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) neutron_server 2025-11-01 13:38:31.110514 | orchestrator | 8b263641cd2e registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_central 2025-11-01 13:38:31.110523 | orchestrator | a04a031f7ce6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_api 2025-11-01 13:38:31.110533 | orchestrator | 87ad69b8f30d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_backend_bind9 2025-11-01 13:38:31.110543 | orchestrator | 40d398fca731 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_worker 2025-11-01 13:38:31.110552 | orchestrator | e2381dae8c0c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_keystone_listener 2025-11-01 13:38:31.110561 | orchestrator | 6257480d4fbb registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) nova_api 
2025-11-01 13:38:31.110571 | orchestrator | 610bdaf3b599 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_api 2025-11-01 13:38:31.110581 | orchestrator | 78e41d99429a registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 22 minutes ago Up 17 minutes (healthy) nova_scheduler 2025-11-01 13:38:31.110600 | orchestrator | 930391ce0912 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) glance_api 2025-11-01 13:38:31.110610 | orchestrator | 25c2443e8c3a registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_scheduler 2025-11-01 13:38:31.110620 | orchestrator | 31993071400f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_elasticsearch_exporter 2025-11-01 13:38:31.110632 | orchestrator | 683d8b9b02ce registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_cadvisor 2025-11-01 13:38:31.110642 | orchestrator | b65f32d504d2 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_api 2025-11-01 13:38:31.110652 | orchestrator | b7b4a334ace8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_memcached_exporter 2025-11-01 13:38:31.110672 | orchestrator | d6e83d89a163 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_mysqld_exporter 2025-11-01 13:38:31.110682 | orchestrator | fbae3338be3d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes prometheus_node_exporter 2025-11-01 13:38:31.110691 | orchestrator | 1b33a8025a4f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 
26 minutes ago Up 26 minutes ceph-mgr-testbed-node-0 2025-11-01 13:38:31.110701 | orchestrator | 4af304640d53 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone 2025-11-01 13:38:31.110710 | orchestrator | d90654439966 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_fernet 2025-11-01 13:38:31.110720 | orchestrator | 15b18a669231 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_ssh 2025-11-01 13:38:31.110730 | orchestrator | 522d9ba4a35d registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) horizon 2025-11-01 13:38:31.110739 | orchestrator | 0c9ff325871c registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 30 minutes ago Up 30 minutes (healthy) mariadb 2025-11-01 13:38:31.110748 | orchestrator | 32f7cda439e3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) opensearch_dashboards 2025-11-01 13:38:31.110758 | orchestrator | ae657896329b registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) opensearch 2025-11-01 13:38:31.110768 | orchestrator | c819c45c3ff3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes keepalived 2025-11-01 13:38:31.110777 | orchestrator | 119a401dd445 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 34 minutes ago Up 34 minutes ceph-crash-testbed-node-0 2025-11-01 13:38:31.110790 | orchestrator | ceac1a8e7880 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) proxysql 2025-11-01 13:38:31.110800 | orchestrator | 69940f996b31 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) haproxy 2025-11-01 
13:38:31.110810 | orchestrator | f444e890ab5e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_northd 2025-11-01 13:38:31.110819 | orchestrator | 9733d2ce73f3 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_sb_db 2025-11-01 13:38:31.110829 | orchestrator | ed8790afc27f registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_nb_db 2025-11-01 13:38:31.110849 | orchestrator | 3d723180d991 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 39 minutes ago Up 39 minutes ovn_controller 2025-11-01 13:38:31.110859 | orchestrator | 8021d360b8e1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 39 minutes ago Up 39 minutes ceph-mon-testbed-node-0 2025-11-01 13:38:31.110873 | orchestrator | 8af9febe3a72 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) rabbitmq 2025-11-01 13:38:31.110883 | orchestrator | 863ce4cc6ddf registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) openvswitch_vswitchd 2025-11-01 13:38:31.110892 | orchestrator | bbd9b9341d5f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) openvswitch_db 2025-11-01 13:38:31.110902 | orchestrator | 6b8de0f80022 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) redis_sentinel 2025-11-01 13:38:31.110911 | orchestrator | a7ffc896bf59 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) redis 2025-11-01 13:38:31.110921 | orchestrator | a619ef5c03ed registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) memcached 2025-11-01 13:38:31.110930 | orchestrator | 2223aa67d3d9 
registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes cron 2025-11-01 13:38:31.110940 | orchestrator | 7521b89b58d0 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes kolla_toolbox 2025-11-01 13:38:31.110949 | orchestrator | a191154cb395 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes fluentd 2025-11-01 13:38:31.492250 | orchestrator | 2025-11-01 13:38:31.492342 | orchestrator | ## Images @ testbed-node-0 2025-11-01 13:38:31.492355 | orchestrator | 2025-11-01 13:38:31.492365 | orchestrator | + echo 2025-11-01 13:38:31.492375 | orchestrator | + echo '## Images @ testbed-node-0' 2025-11-01 13:38:31.492385 | orchestrator | + echo 2025-11-01 13:38:31.492394 | orchestrator | + osism container testbed-node-0 images 2025-11-01 13:38:34.171785 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 13:38:34.171897 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 10 hours ago 1.27GB 2025-11-01 13:38:34.171914 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 12 hours ago 394MB 2025-11-01 13:38:34.171926 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 12 hours ago 1GB 2025-11-01 13:38:34.171937 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 12 hours ago 267MB 2025-11-01 13:38:34.171948 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 12 hours ago 580MB 2025-11-01 13:38:34.171959 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 12 hours ago 275MB 2025-11-01 13:38:34.171970 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 12 hours ago 278MB 2025-11-01 13:38:34.171980 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 12 hours ago 324MB 2025-11-01 13:38:34.171991 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 12 hours ago 1.51GB 
2025-11-01 13:38:34.172003 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 12 hours ago 1.54GB 2025-11-01 13:38:34.172014 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 12 hours ago 671MB 2025-11-01 13:38:34.172025 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 12 hours ago 267MB 2025-11-01 13:38:34.172035 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 155e1f38ae11 12 hours ago 449MB 2025-11-01 13:38:34.172046 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 12 hours ago 293MB 2025-11-01 13:38:34.172080 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 12 hours ago 307MB 2025-11-01 13:38:34.172092 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 12 hours ago 302MB 2025-11-01 13:38:34.172102 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 12 hours ago 358MB 2025-11-01 13:38:34.172113 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 12 hours ago 300MB 2025-11-01 13:38:34.172124 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 12 hours ago 1.15GB 2025-11-01 13:38:34.172134 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 12 hours ago 274MB 2025-11-01 13:38:34.172144 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 12 hours ago 274MB 2025-11-01 13:38:34.172174 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 12 hours ago 280MB 2025-11-01 13:38:34.172185 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 12 hours ago 280MB 2025-11-01 13:38:34.172196 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 12 hours ago 977MB 2025-11-01 13:38:34.172207 | orchestrator | 
registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 12 hours ago 990MB 2025-11-01 13:38:34.172218 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 12 hours ago 986MB 2025-11-01 13:38:34.172228 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 12 hours ago 985MB 2025-11-01 13:38:34.172239 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 12 hours ago 986MB 2025-11-01 13:38:34.172250 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 12 hours ago 986MB 2025-11-01 13:38:34.172260 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 12 hours ago 990MB 2025-11-01 13:38:34.172271 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 12 hours ago 1.1GB 2025-11-01 13:38:34.172282 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 12 hours ago 992MB 2025-11-01 13:38:34.172292 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 12 hours ago 991MB 2025-11-01 13:38:34.172303 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 12 hours ago 992MB 2025-11-01 13:38:34.172313 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 12 hours ago 1.16GB 2025-11-01 13:38:34.172345 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 12 hours ago 1.4GB 2025-11-01 13:38:34.172383 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 12 hours ago 1.4GB 2025-11-01 13:38:34.172395 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 dff5bfd72644 12 hours ago 975MB 2025-11-01 13:38:34.172406 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 0ffe873f9568 12 hours ago 975MB 2025-11-01 13:38:34.172416 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 3b0fa753bca7 12 hours ago 975MB 2025-11-01 13:38:34.172427 | 
orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 ffc5292b586f 12 hours ago 974MB 2025-11-01 13:38:34.172438 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 12 hours ago 1.13GB 2025-11-01 13:38:34.172449 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 12 hours ago 1.24GB 2025-11-01 13:38:34.172470 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 bae5a17370e4 12 hours ago 1.04GB 2025-11-01 13:38:34.172481 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 12 hours ago 1.09GB 2025-11-01 13:38:34.172492 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 12 hours ago 1.04GB 2025-11-01 13:38:34.172502 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 6e37d25838f3 12 hours ago 978MB 2025-11-01 13:38:34.172513 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 820bdc545666 12 hours ago 977MB 2025-11-01 13:38:34.172524 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ee77aaf10e87 12 hours ago 1.05GB 2025-11-01 13:38:34.172534 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 a160aa4e4243 12 hours ago 991MB 2025-11-01 13:38:34.172551 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 12 hours ago 1.05GB 2025-11-01 13:38:34.172562 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 12 hours ago 1.03GB 2025-11-01 13:38:34.172572 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 12 hours ago 1.05GB 2025-11-01 13:38:34.172583 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 12 hours ago 1.03GB 2025-11-01 13:38:34.172593 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 12 hours ago 1.03GB 2025-11-01 13:38:34.172604 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 12 hours ago 1.21GB 2025-11-01 
13:38:34.172615 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 12 hours ago 1.21GB 2025-11-01 13:38:34.172625 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 12 hours ago 1.37GB 2025-11-01 13:38:34.172636 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 12 hours ago 1.21GB 2025-11-01 13:38:34.172647 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 12 hours ago 841MB 2025-11-01 13:38:34.172657 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 12 hours ago 841MB 2025-11-01 13:38:34.172668 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 12 hours ago 841MB 2025-11-01 13:38:34.172679 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 12 hours ago 841MB 2025-11-01 13:38:34.540860 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 13:38:34.541275 | orchestrator | ++ semver latest 5.0.0 2025-11-01 13:38:34.606876 | orchestrator | 2025-11-01 13:38:34.606907 | orchestrator | ## Containers @ testbed-node-1 2025-11-01 13:38:34.606919 | orchestrator | 2025-11-01 13:38:34.606930 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 13:38:34.606940 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:34.606951 | orchestrator | + echo 2025-11-01 13:38:34.606962 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-11-01 13:38:34.606973 | orchestrator | + echo 2025-11-01 13:38:34.606984 | orchestrator | + osism container testbed-node-1 ps 2025-11-01 13:38:37.220978 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 13:38:37.221131 | orchestrator | a0cdac4c3dd4 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_worker 2025-11-01 13:38:37.221150 | orchestrator | d73b10daa458 registry.osism.tech/kolla/octavia-housekeeping:2024.2 
"dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_housekeeping 2025-11-01 13:38:37.221163 | orchestrator | 65538f9da465 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_health_manager 2025-11-01 13:38:37.221202 | orchestrator | 3e839ceb3527 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes octavia_driver_agent 2025-11-01 13:38:37.221214 | orchestrator | 9f43c164a21c registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_api 2025-11-01 13:38:37.221225 | orchestrator | 591bd2db78de registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes grafana 2025-11-01 13:38:37.221235 | orchestrator | 597af5878314 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_conductor 2025-11-01 13:38:37.221246 | orchestrator | 3d7453a74082 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_api 2025-11-01 13:38:37.221257 | orchestrator | 3c496697b219 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2025-11-01 13:38:37.221283 | orchestrator | 76209db56c40 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_worker 2025-11-01 13:38:37.221295 | orchestrator | 794884eca4b5 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) nova_novncproxy 2025-11-01 13:38:37.221305 | orchestrator | 1e54b74bd71e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) designate_mdns 2025-11-01 13:38:37.221316 | orchestrator | 7a0c984f09c1 registry.osism.tech/kolla/neutron-server:2024.2 
"dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) neutron_server 2025-11-01 13:38:37.221369 | orchestrator | 54d28b80c1fe registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_producer 2025-11-01 13:38:37.221381 | orchestrator | be9a01f5931a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 19 minutes ago Up 17 minutes (healthy) nova_conductor 2025-11-01 13:38:37.221392 | orchestrator | 88dc1dc937e2 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_central 2025-11-01 13:38:37.221403 | orchestrator | 61469659e260 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_api 2025-11-01 13:38:37.221414 | orchestrator | 0ba2a738551d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_backend_bind9 2025-11-01 13:38:37.221425 | orchestrator | 6fb48cf82800 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_worker 2025-11-01 13:38:37.221436 | orchestrator | 3bd2d94b1a3c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_keystone_listener 2025-11-01 13:38:37.221447 | orchestrator | ece7f5c4d797 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) nova_api 2025-11-01 13:38:37.221474 | orchestrator | 35c51451858c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_api 2025-11-01 13:38:37.221485 | orchestrator | c15d0858dfff registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 21 minutes ago Up 17 minutes (healthy) nova_scheduler 2025-11-01 13:38:37.221504 | orchestrator | ed95291ee3ef 
registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) glance_api 2025-11-01 13:38:37.221515 | orchestrator | 87938658c519 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_scheduler 2025-11-01 13:38:37.221526 | orchestrator | e0881e05cb5e registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_api 2025-11-01 13:38:37.221536 | orchestrator | db2d574b8d21 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_elasticsearch_exporter 2025-11-01 13:38:37.221547 | orchestrator | 95d5f4090d6c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_cadvisor 2025-11-01 13:38:37.221558 | orchestrator | a1ac0462abca registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_memcached_exporter 2025-11-01 13:38:37.221569 | orchestrator | 2916a2b3d77a registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes prometheus_mysqld_exporter 2025-11-01 13:38:37.221579 | orchestrator | d7486d2c867c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes prometheus_node_exporter 2025-11-01 13:38:37.221598 | orchestrator | 9925f34ff104 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 26 minutes ago Up 26 minutes ceph-mgr-testbed-node-1 2025-11-01 13:38:37.221609 | orchestrator | 845aa595ab2e registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone 2025-11-01 13:38:37.221620 | orchestrator | 97526b0acda7 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) horizon 2025-11-01 13:38:37.221630 
| orchestrator | 3343e3093a7d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_fernet 2025-11-01 13:38:37.221641 | orchestrator | fc99c1c130cd registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_ssh 2025-11-01 13:38:37.221651 | orchestrator | f26f652c5877 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) opensearch_dashboards 2025-11-01 13:38:37.221662 | orchestrator | 98d4384b0dfa registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 32 minutes ago Up 32 minutes (healthy) mariadb 2025-11-01 13:38:37.221673 | orchestrator | 60ac51e92aed registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) opensearch 2025-11-01 13:38:37.221683 | orchestrator | 71292a19ed54 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes keepalived 2025-11-01 13:38:37.221694 | orchestrator | 88f62bc60270 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 34 minutes ago Up 34 minutes ceph-crash-testbed-node-1 2025-11-01 13:38:37.221704 | orchestrator | e650d5426da7 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) proxysql 2025-11-01 13:38:37.221721 | orchestrator | 87c1fe06b652 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) haproxy 2025-11-01 13:38:37.221732 | orchestrator | 71d4c8225905 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 38 minutes ago Up 37 minutes ovn_northd 2025-11-01 13:38:37.221752 | orchestrator | b62067a6144b registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 38 minutes ago Up 37 minutes ovn_sb_db 2025-11-01 13:38:37.221764 | orchestrator | 5fd7093bf407 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 
"dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_nb_db 2025-11-01 13:38:37.221775 | orchestrator | 83376e8cdf77 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_controller 2025-11-01 13:38:37.221785 | orchestrator | 24c658a41b6b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) rabbitmq 2025-11-01 13:38:37.221796 | orchestrator | 85fba22a757f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 39 minutes ago Up 39 minutes ceph-mon-testbed-node-1 2025-11-01 13:38:37.221806 | orchestrator | 71678642ac1b registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) openvswitch_vswitchd 2025-11-01 13:38:37.221817 | orchestrator | 1b279dc796ca registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) openvswitch_db 2025-11-01 13:38:37.221828 | orchestrator | 541b997d5ce3 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) redis_sentinel 2025-11-01 13:38:37.221838 | orchestrator | 1281e1481d46 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) redis 2025-11-01 13:38:37.221849 | orchestrator | c1c8c17d5c82 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) memcached 2025-11-01 13:38:37.221859 | orchestrator | c1184b70688b registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes cron 2025-11-01 13:38:37.221870 | orchestrator | 9930fac500ca registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes kolla_toolbox 2025-11-01 13:38:37.221885 | orchestrator | 042630b1e6c5 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes fluentd 2025-11-01 13:38:37.607509 | 
orchestrator | 2025-11-01 13:38:37.607589 | orchestrator | ## Images @ testbed-node-1 2025-11-01 13:38:37.607603 | orchestrator | 2025-11-01 13:38:37.607616 | orchestrator | + echo 2025-11-01 13:38:37.607627 | orchestrator | + echo '## Images @ testbed-node-1' 2025-11-01 13:38:37.607638 | orchestrator | + echo 2025-11-01 13:38:37.607649 | orchestrator | + osism container testbed-node-1 images 2025-11-01 13:38:40.255531 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 13:38:40.255629 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 10 hours ago 1.27GB 2025-11-01 13:38:40.255643 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 12 hours ago 394MB 2025-11-01 13:38:40.255654 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 12 hours ago 1GB 2025-11-01 13:38:40.255664 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 12 hours ago 267MB 2025-11-01 13:38:40.255700 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 12 hours ago 580MB 2025-11-01 13:38:40.255711 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 12 hours ago 278MB 2025-11-01 13:38:40.255720 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 12 hours ago 275MB 2025-11-01 13:38:40.255730 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 12 hours ago 324MB 2025-11-01 13:38:40.255740 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 12 hours ago 1.51GB 2025-11-01 13:38:40.255749 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 12 hours ago 1.54GB 2025-11-01 13:38:40.255759 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 12 hours ago 671MB 2025-11-01 13:38:40.255768 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 12 hours ago 267MB 2025-11-01 13:38:40.255778 | orchestrator | registry.osism.tech/kolla/mariadb-server 
2024.2 155e1f38ae11 12 hours ago 449MB 2025-11-01 13:38:40.255787 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 12 hours ago 293MB 2025-11-01 13:38:40.255797 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 12 hours ago 307MB 2025-11-01 13:38:40.255806 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 12 hours ago 302MB 2025-11-01 13:38:40.255816 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 12 hours ago 358MB 2025-11-01 13:38:40.255825 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 12 hours ago 300MB 2025-11-01 13:38:40.255835 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 12 hours ago 1.15GB 2025-11-01 13:38:40.255844 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 12 hours ago 274MB 2025-11-01 13:38:40.255854 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 12 hours ago 274MB 2025-11-01 13:38:40.255863 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 12 hours ago 280MB 2025-11-01 13:38:40.255873 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 12 hours ago 280MB 2025-11-01 13:38:40.255882 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 12 hours ago 977MB 2025-11-01 13:38:40.255892 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 12 hours ago 990MB 2025-11-01 13:38:40.255901 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 12 hours ago 986MB 2025-11-01 13:38:40.255911 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 12 hours ago 985MB 2025-11-01 13:38:40.255920 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 12 hours ago 986MB 2025-11-01 13:38:40.255929 
| orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 12 hours ago 986MB 2025-11-01 13:38:40.255939 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 12 hours ago 990MB 2025-11-01 13:38:40.255948 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 12 hours ago 1.1GB 2025-11-01 13:38:40.255958 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 12 hours ago 992MB 2025-11-01 13:38:40.255967 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 12 hours ago 991MB 2025-11-01 13:38:40.255977 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 12 hours ago 992MB 2025-11-01 13:38:40.255992 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 12 hours ago 1.16GB 2025-11-01 13:38:40.256002 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 12 hours ago 1.4GB 2025-11-01 13:38:40.256028 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 12 hours ago 1.4GB 2025-11-01 13:38:40.256039 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 12 hours ago 1.13GB 2025-11-01 13:38:40.256048 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 12 hours ago 1.24GB 2025-11-01 13:38:40.256058 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 bae5a17370e4 12 hours ago 1.04GB 2025-11-01 13:38:40.256067 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 12 hours ago 1.09GB 2025-11-01 13:38:40.256078 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 12 hours ago 1.04GB 2025-11-01 13:38:40.256089 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 12 hours ago 1.05GB 2025-11-01 13:38:40.256116 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 12 hours ago 1.03GB 2025-11-01 13:38:40.256127 | 
orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 12 hours ago 1.05GB 2025-11-01 13:38:40.256138 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 12 hours ago 1.03GB 2025-11-01 13:38:40.256149 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 12 hours ago 1.03GB 2025-11-01 13:38:40.256160 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 12 hours ago 1.21GB 2025-11-01 13:38:40.256171 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 12 hours ago 1.21GB 2025-11-01 13:38:40.256182 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 12 hours ago 1.37GB 2025-11-01 13:38:40.256192 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 12 hours ago 1.21GB 2025-11-01 13:38:40.256203 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 12 hours ago 841MB 2025-11-01 13:38:40.256213 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 12 hours ago 841MB 2025-11-01 13:38:40.256224 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 12 hours ago 841MB 2025-11-01 13:38:40.256235 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 12 hours ago 841MB 2025-11-01 13:38:40.646873 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-11-01 13:38:40.647625 | orchestrator | ++ semver latest 5.0.0 2025-11-01 13:38:40.717741 | orchestrator | 2025-11-01 13:38:40.717778 | orchestrator | ## Containers @ testbed-node-2 2025-11-01 13:38:40.717789 | orchestrator | 2025-11-01 13:38:40.717799 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 13:38:40.717809 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:40.717818 | orchestrator | + echo 2025-11-01 13:38:40.717829 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-11-01 13:38:40.717839 | 
orchestrator | + echo 2025-11-01 13:38:40.717849 | orchestrator | + osism container testbed-node-2 ps 2025-11-01 13:38:43.374413 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-11-01 13:38:43.374494 | orchestrator | 2cf5cfda3407 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_worker 2025-11-01 13:38:43.374505 | orchestrator | 07aa8b131106 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_housekeeping 2025-11-01 13:38:43.374535 | orchestrator | 7a56a950c3f4 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_health_manager 2025-11-01 13:38:43.374545 | orchestrator | 0f7886df9e85 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes octavia_driver_agent 2025-11-01 13:38:43.374554 | orchestrator | 04afe0f43463 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) octavia_api 2025-11-01 13:38:43.374564 | orchestrator | 0ad864bb5a2b registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes grafana 2025-11-01 13:38:43.374574 | orchestrator | 642d649762e7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) magnum_conductor 2025-11-01 13:38:43.374584 | orchestrator | e41011879ad3 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) magnum_api 2025-11-01 13:38:43.374593 | orchestrator | 2d71ccb3f993 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2025-11-01 13:38:43.374603 | orchestrator | 5a01da1f543d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes 
(healthy) designate_worker 2025-11-01 13:38:43.374612 | orchestrator | de59549a159c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) nova_novncproxy 2025-11-01 13:38:43.374622 | orchestrator | b39517933362 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_mdns 2025-11-01 13:38:43.374631 | orchestrator | 1540c09587b1 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) neutron_server 2025-11-01 13:38:43.374641 | orchestrator | 3eab5ecdfcb9 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_producer 2025-11-01 13:38:43.374650 | orchestrator | 9013b68c16ad registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 19 minutes ago Up 17 minutes (healthy) nova_conductor 2025-11-01 13:38:43.374660 | orchestrator | 6aaa7b13346d registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_central 2025-11-01 13:38:43.374669 | orchestrator | bbc7c5f0f2da registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_api 2025-11-01 13:38:43.374679 | orchestrator | 3f8654da6b52 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) designate_backend_bind9 2025-11-01 13:38:43.374688 | orchestrator | 4c37bde245cf registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_worker 2025-11-01 13:38:43.374698 | orchestrator | 9a4f41b8a1c5 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_keystone_listener 2025-11-01 13:38:43.374707 | orchestrator | 223b56c86737 
registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) nova_api 2025-11-01 13:38:43.374756 | orchestrator | ac7976fb1e6f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) barbican_api 2025-11-01 13:38:43.374780 | orchestrator | 9d1fb3d070a9 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 21 minutes ago Up 17 minutes (healthy) nova_scheduler 2025-11-01 13:38:43.374790 | orchestrator | a91e9bbcee9a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) glance_api 2025-11-01 13:38:43.374800 | orchestrator | e3c99f676041 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_scheduler 2025-11-01 13:38:43.374809 | orchestrator | 24b820dc84a4 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) cinder_api 2025-11-01 13:38:43.374819 | orchestrator | c9a52099cc95 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_elasticsearch_exporter 2025-11-01 13:38:43.374829 | orchestrator | 292c82e22060 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_cadvisor 2025-11-01 13:38:43.374879 | orchestrator | b7e1a9f4473f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes prometheus_memcached_exporter 2025-11-01 13:38:43.374891 | orchestrator | af703f778e62 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes prometheus_mysqld_exporter 2025-11-01 13:38:43.374901 | orchestrator | 46545336223d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes prometheus_node_exporter 2025-11-01 
13:38:43.374910 | orchestrator | a9df297960ba registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 26 minutes ago Up 26 minutes ceph-mgr-testbed-node-2 2025-11-01 13:38:43.374919 | orchestrator | 2043c3a9fbdd registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone 2025-11-01 13:38:43.374929 | orchestrator | 583055471a24 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) horizon 2025-11-01 13:38:43.374938 | orchestrator | 0382257d90a7 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_fernet 2025-11-01 13:38:43.374948 | orchestrator | e360efe1bc9e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) keystone_ssh 2025-11-01 13:38:43.374957 | orchestrator | b6ba324d6960 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) opensearch_dashboards 2025-11-01 13:38:43.374969 | orchestrator | 98d1ee459379 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 31 minutes ago Up 31 minutes (healthy) mariadb 2025-11-01 13:38:43.374980 | orchestrator | eca8933e2f17 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) opensearch 2025-11-01 13:38:43.374992 | orchestrator | 1996372741cc registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes keepalived 2025-11-01 13:38:43.375003 | orchestrator | 1093a62e0131 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 34 minutes ago Up 34 minutes ceph-crash-testbed-node-2 2025-11-01 13:38:43.375020 | orchestrator | bf07931fb28c registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) proxysql 2025-11-01 13:38:43.375031 | orchestrator | 27ec8687ddb7 
registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) haproxy 2025-11-01 13:38:43.375042 | orchestrator | 382cfa39aca5 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 38 minutes ago Up 37 minutes ovn_northd 2025-11-01 13:38:43.375059 | orchestrator | 107647c025ea registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_sb_db 2025-11-01 13:38:43.375071 | orchestrator | 31c1209c453e registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_nb_db 2025-11-01 13:38:43.375082 | orchestrator | 6ea626412c71 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) rabbitmq 2025-11-01 13:38:43.375093 | orchestrator | 7cffcffa966d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 38 minutes ago Up 38 minutes ovn_controller 2025-11-01 13:38:43.375103 | orchestrator | c592970708c9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 39 minutes ago Up 39 minutes ceph-mon-testbed-node-2 2025-11-01 13:38:43.375114 | orchestrator | a58304ceb049 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) openvswitch_vswitchd 2025-11-01 13:38:43.375125 | orchestrator | ef5c7c298e65 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 41 minutes ago Up 40 minutes (healthy) openvswitch_db 2025-11-01 13:38:43.375136 | orchestrator | 81e806491393 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) redis_sentinel 2025-11-01 13:38:43.375147 | orchestrator | c593f3ca8645 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) redis 2025-11-01 13:38:43.375157 | orchestrator | bf964c9cf331 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 41 minutes ago 
Up 41 minutes (healthy) memcached 2025-11-01 13:38:43.375169 | orchestrator | 624be9b93c6c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 41 minutes ago Up 41 minutes cron 2025-11-01 13:38:43.375180 | orchestrator | 9cf6018934e0 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes kolla_toolbox 2025-11-01 13:38:43.375249 | orchestrator | 0c750877beea registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 42 minutes ago Up 42 minutes fluentd 2025-11-01 13:38:43.757269 | orchestrator | 2025-11-01 13:38:43.757368 | orchestrator | ## Images @ testbed-node-2 2025-11-01 13:38:43.757381 | orchestrator | 2025-11-01 13:38:43.757392 | orchestrator | + echo 2025-11-01 13:38:43.757401 | orchestrator | + echo '## Images @ testbed-node-2' 2025-11-01 13:38:43.757412 | orchestrator | + echo 2025-11-01 13:38:43.757421 | orchestrator | + osism container testbed-node-2 images 2025-11-01 13:38:46.492772 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-11-01 13:38:46.492870 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 44f898f2e9b3 10 hours ago 1.27GB 2025-11-01 13:38:46.492884 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a500281cdb12 12 hours ago 394MB 2025-11-01 13:38:46.492894 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3370d7cde2dc 12 hours ago 1GB 2025-11-01 13:38:46.492925 | orchestrator | registry.osism.tech/kolla/cron 2024.2 eaa73375e046 12 hours ago 267MB 2025-11-01 13:38:46.492935 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ce8a1ccf9781 12 hours ago 580MB 2025-11-01 13:38:46.492945 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 f2eec3862497 12 hours ago 278MB 2025-11-01 13:38:46.492954 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 3d25be654b17 12 hours ago 275MB 2025-11-01 13:38:46.492963 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c73bff14dba3 12 hours ago 324MB 2025-11-01 13:38:46.492973 | 
orchestrator | registry.osism.tech/kolla/opensearch 2024.2 308dc86a2ead 12 hours ago 1.54GB 2025-11-01 13:38:46.492982 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 909a2a43a28b 12 hours ago 1.51GB 2025-11-01 13:38:46.492991 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d0c82a0ec65c 12 hours ago 671MB 2025-11-01 13:38:46.493001 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 85001988322c 12 hours ago 267MB 2025-11-01 13:38:46.493010 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 155e1f38ae11 12 hours ago 449MB 2025-11-01 13:38:46.493019 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 eb0992b53bde 12 hours ago 293MB 2025-11-01 13:38:46.493029 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d674d1069289 12 hours ago 307MB 2025-11-01 13:38:46.493038 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 01facc12637c 12 hours ago 302MB 2025-11-01 13:38:46.493062 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 1478fa905298 12 hours ago 358MB 2025-11-01 13:38:46.493072 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b5ab78f4ac8f 12 hours ago 300MB 2025-11-01 13:38:46.493081 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 0e66e32d606a 12 hours ago 1.15GB 2025-11-01 13:38:46.493091 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d5aa73840e10 12 hours ago 274MB 2025-11-01 13:38:46.493100 | orchestrator | registry.osism.tech/kolla/redis 2024.2 c4b13aebd387 12 hours ago 274MB 2025-11-01 13:38:46.493109 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0fc4ae24ea0e 12 hours ago 280MB 2025-11-01 13:38:46.493118 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 aa9fcdd9c97e 12 hours ago 280MB 2025-11-01 13:38:46.493128 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 765dfa6912f5 12 
hours ago 977MB 2025-11-01 13:38:46.493137 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 57373846f77c 12 hours ago 990MB 2025-11-01 13:38:46.493146 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 6ac3d58ce359 12 hours ago 986MB 2025-11-01 13:38:46.493155 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 f11ee4a81645 12 hours ago 985MB 2025-11-01 13:38:46.493165 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 a262eee2014d 12 hours ago 986MB 2025-11-01 13:38:46.493174 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b21b61bdd6f3 12 hours ago 986MB 2025-11-01 13:38:46.493184 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 b387cac5b723 12 hours ago 990MB 2025-11-01 13:38:46.493193 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 fc49ef6499ee 12 hours ago 1.1GB 2025-11-01 13:38:46.493202 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c7ad4d47bf86 12 hours ago 992MB 2025-11-01 13:38:46.493212 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 99f2c51e3b82 12 hours ago 991MB 2025-11-01 13:38:46.493229 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 0ea7bd5c41df 12 hours ago 992MB 2025-11-01 13:38:46.493238 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 c544fb2ad50e 12 hours ago 1.16GB 2025-11-01 13:38:46.493248 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 3aed99d10b53 12 hours ago 1.4GB 2025-11-01 13:38:46.493273 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 989948c1427d 12 hours ago 1.4GB 2025-11-01 13:38:46.493284 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 508223a03f2a 12 hours ago 1.13GB 2025-11-01 13:38:46.493293 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 38bab00d6975 12 hours ago 1.24GB 2025-11-01 13:38:46.493304 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 
bae5a17370e4 12 hours ago 1.04GB 2025-11-01 13:38:46.493314 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f333080e0f8b 12 hours ago 1.09GB 2025-11-01 13:38:46.493353 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 b9d44a046aa8 12 hours ago 1.04GB 2025-11-01 13:38:46.493365 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ad98f2eeffe7 12 hours ago 1.05GB 2025-11-01 13:38:46.493375 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 c3d1f1672d13 12 hours ago 1.03GB 2025-11-01 13:38:46.493386 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 178f20b77a9f 12 hours ago 1.05GB 2025-11-01 13:38:46.493397 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 82f8f3aaed83 12 hours ago 1.03GB 2025-11-01 13:38:46.493407 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 4131f85c01c8 12 hours ago 1.03GB 2025-11-01 13:38:46.493418 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 bd151c181d88 12 hours ago 1.21GB 2025-11-01 13:38:46.493429 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b1a4ecd753b1 12 hours ago 1.21GB 2025-11-01 13:38:46.493439 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 95866e00ca40 12 hours ago 1.37GB 2025-11-01 13:38:46.493450 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 39fb338b4a2c 12 hours ago 1.21GB 2025-11-01 13:38:46.493461 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 421bf1ebb80e 12 hours ago 841MB 2025-11-01 13:38:46.493471 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 30d4f67a1924 12 hours ago 841MB 2025-11-01 13:38:46.493482 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 822cd9e7eadc 12 hours ago 841MB 2025-11-01 13:38:46.493493 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 7c4750c0829d 12 hours ago 841MB 2025-11-01 13:38:46.874291 | orchestrator | + sh -c 
/opt/configuration/scripts/check-services.sh 2025-11-01 13:38:46.881227 | orchestrator | + set -e 2025-11-01 13:38:46.881258 | orchestrator | + source /opt/manager-vars.sh 2025-11-01 13:38:46.882182 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-11-01 13:38:46.882204 | orchestrator | ++ NUMBER_OF_NODES=6 2025-11-01 13:38:46.882215 | orchestrator | ++ export CEPH_VERSION=reef 2025-11-01 13:38:46.882226 | orchestrator | ++ CEPH_VERSION=reef 2025-11-01 13:38:46.882237 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-11-01 13:38:46.882248 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-11-01 13:38:46.882258 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:38:46.882269 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:38:46.882280 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-11-01 13:38:46.882290 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-11-01 13:38:46.882301 | orchestrator | ++ export ARA=false 2025-11-01 13:38:46.882311 | orchestrator | ++ ARA=false 2025-11-01 13:38:46.882350 | orchestrator | ++ export DEPLOY_MODE=manager 2025-11-01 13:38:46.882367 | orchestrator | ++ DEPLOY_MODE=manager 2025-11-01 13:38:46.882403 | orchestrator | ++ export TEMPEST=false 2025-11-01 13:38:46.882415 | orchestrator | ++ TEMPEST=false 2025-11-01 13:38:46.882425 | orchestrator | ++ export IS_ZUUL=true 2025-11-01 13:38:46.882436 | orchestrator | ++ IS_ZUUL=true 2025-11-01 13:38:46.882447 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 13:38:46.882457 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2025-11-01 13:38:46.882468 | orchestrator | ++ export EXTERNAL_API=false 2025-11-01 13:38:46.882478 | orchestrator | ++ EXTERNAL_API=false 2025-11-01 13:38:46.882489 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-11-01 13:38:46.882499 | orchestrator | ++ IMAGE_USER=ubuntu 2025-11-01 13:38:46.882510 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-11-01 13:38:46.882521 | orchestrator | ++ 
IMAGE_NODE_USER=ubuntu 2025-11-01 13:38:46.882531 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-11-01 13:38:46.882542 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-11-01 13:38:46.882569 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-11-01 13:38:46.882580 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-11-01 13:38:46.894114 | orchestrator | + set -e 2025-11-01 13:38:46.894140 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-11-01 13:38:46.894151 | orchestrator | ++ export INTERACTIVE=false 2025-11-01 13:38:46.894162 | orchestrator | ++ INTERACTIVE=false 2025-11-01 13:38:46.894172 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-11-01 13:38:46.894183 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-11-01 13:38:46.894367 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-11-01 13:38:46.896343 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-11-01 13:38:46.903097 | orchestrator | 2025-11-01 13:38:46.903120 | orchestrator | # Ceph status 2025-11-01 13:38:46.903132 | orchestrator | 2025-11-01 13:38:46.903144 | orchestrator | ++ export MANAGER_VERSION=latest 2025-11-01 13:38:46.903155 | orchestrator | ++ MANAGER_VERSION=latest 2025-11-01 13:38:46.903167 | orchestrator | + echo 2025-11-01 13:38:46.903178 | orchestrator | + echo '# Ceph status' 2025-11-01 13:38:46.903190 | orchestrator | + echo 2025-11-01 13:38:46.903201 | orchestrator | + ceph -s 2025-11-01 13:38:47.534490 | orchestrator | cluster: 2025-11-01 13:38:47.534584 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-11-01 13:38:47.534599 | orchestrator | health: HEALTH_OK 2025-11-01 13:38:47.534611 | orchestrator | 2025-11-01 13:38:47.534622 | orchestrator | services: 2025-11-01 13:38:47.534632 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 39m) 2025-11-01 
13:38:47.534645 | orchestrator | mgr: testbed-node-0(active, since 26m), standbys: testbed-node-1, testbed-node-2 2025-11-01 13:38:47.534657 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-11-01 13:38:47.534667 | orchestrator | osd: 6 osds: 6 up (since 35m), 6 in (since 36m) 2025-11-01 13:38:47.534678 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-11-01 13:38:47.534689 | orchestrator | 2025-11-01 13:38:47.534700 | orchestrator | data: 2025-11-01 13:38:47.534710 | orchestrator | volumes: 1/1 healthy 2025-11-01 13:38:47.534721 | orchestrator | pools: 14 pools, 401 pgs 2025-11-01 13:38:47.534732 | orchestrator | objects: 523 objects, 2.2 GiB 2025-11-01 13:38:47.534743 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-11-01 13:38:47.534753 | orchestrator | pgs: 401 active+clean 2025-11-01 13:38:47.534764 | orchestrator | 2025-11-01 13:38:47.583642 | orchestrator | 2025-11-01 13:38:47.583698 | orchestrator | # Ceph versions 2025-11-01 13:38:47.583710 | orchestrator | 2025-11-01 13:38:47.583729 | orchestrator | + echo 2025-11-01 13:38:47.583748 | orchestrator | + echo '# Ceph versions' 2025-11-01 13:38:47.583767 | orchestrator | + echo 2025-11-01 13:38:47.583785 | orchestrator | + ceph versions 2025-11-01 13:38:48.211251 | orchestrator | { 2025-11-01 13:38:48.211388 | orchestrator | "mon": { 2025-11-01 13:38:48.211404 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 13:38:48.211417 | orchestrator | }, 2025-11-01 13:38:48.211428 | orchestrator | "mgr": { 2025-11-01 13:38:48.211439 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 13:38:48.211449 | orchestrator | }, 2025-11-01 13:38:48.211460 | orchestrator | "osd": { 2025-11-01 13:38:48.211471 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-11-01 13:38:48.211482 | orchestrator | }, 2025-11-01 13:38:48.211492 | 
orchestrator | "mds": { 2025-11-01 13:38:48.211503 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 13:38:48.211513 | orchestrator | }, 2025-11-01 13:38:48.211548 | orchestrator | "rgw": { 2025-11-01 13:38:48.211559 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-11-01 13:38:48.211570 | orchestrator | }, 2025-11-01 13:38:48.211581 | orchestrator | "overall": { 2025-11-01 13:38:48.211592 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-11-01 13:38:48.211603 | orchestrator | } 2025-11-01 13:38:48.211613 | orchestrator | } 2025-11-01 13:38:48.256530 | orchestrator | 2025-11-01 13:38:48.256594 | orchestrator | # Ceph OSD tree 2025-11-01 13:38:48.256607 | orchestrator | 2025-11-01 13:38:48.256618 | orchestrator | + echo 2025-11-01 13:38:48.256629 | orchestrator | + echo '# Ceph OSD tree' 2025-11-01 13:38:48.256641 | orchestrator | + echo 2025-11-01 13:38:48.256651 | orchestrator | + ceph osd df tree 2025-11-01 13:38:48.809413 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-11-01 13:38:48.809500 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 442 MiB 113 GiB 5.92 1.00 - root default 2025-11-01 13:38:48.809514 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-11-01 13:38:48.809525 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.80 1.32 200 up osd.0 2025-11-01 13:38:48.809536 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 828 MiB 754 MiB 1 KiB 74 MiB 19 GiB 4.05 0.68 190 up osd.4 2025-11-01 13:38:48.809547 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-11-01 13:38:48.809558 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 972 MiB 899 MiB 1 KiB 74 MiB 19 GiB 4.75 0.80 176 up 
osd.1 2025-11-01 13:38:48.809569 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.10 1.20 216 up osd.3 2025-11-01 13:38:48.809580 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-11-01 13:38:48.809602 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.39 1.08 191 up osd.2 2025-11-01 13:38:48.809613 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.46 0.92 197 up osd.5 2025-11-01 13:38:48.809624 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 442 MiB 113 GiB 5.92 2025-11-01 13:38:48.809635 | orchestrator | MIN/MAX VAR: 0.68/1.32 STDDEV: 1.31 2025-11-01 13:38:48.868603 | orchestrator | 2025-11-01 13:38:48.868653 | orchestrator | # Ceph monitor status 2025-11-01 13:38:48.868666 | orchestrator | 2025-11-01 13:38:48.868678 | orchestrator | + echo 2025-11-01 13:38:48.868689 | orchestrator | + echo '# Ceph monitor status' 2025-11-01 13:38:48.868701 | orchestrator | + echo 2025-11-01 13:38:48.868712 | orchestrator | + ceph mon stat 2025-11-01 13:38:49.532159 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-11-01 13:38:49.590589 | orchestrator | 2025-11-01 13:38:49.590642 | orchestrator | # Ceph quorum status 2025-11-01 13:38:49.590654 | orchestrator | 2025-11-01 13:38:49.590664 | orchestrator | + echo 2025-11-01 13:38:49.590674 | orchestrator | + echo '# Ceph quorum status' 2025-11-01 13:38:49.590684 | orchestrator | + echo 2025-11-01 13:38:49.591418 | orchestrator | + ceph quorum_status 2025-11-01 13:38:49.591437 | orchestrator | + jq 2025-11-01 13:38:50.277069 | orchestrator | { 
2025-11-01 13:38:50.277126 | orchestrator | "election_epoch": 10, 2025-11-01 13:38:50.277139 | orchestrator | "quorum": [ 2025-11-01 13:38:50.277151 | orchestrator | 0, 2025-11-01 13:38:50.277162 | orchestrator | 1, 2025-11-01 13:38:50.277172 | orchestrator | 2 2025-11-01 13:38:50.277183 | orchestrator | ], 2025-11-01 13:38:50.277193 | orchestrator | "quorum_names": [ 2025-11-01 13:38:50.277203 | orchestrator | "testbed-node-0", 2025-11-01 13:38:50.277232 | orchestrator | "testbed-node-1", 2025-11-01 13:38:50.277242 | orchestrator | "testbed-node-2" 2025-11-01 13:38:50.277253 | orchestrator | ], 2025-11-01 13:38:50.277263 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-11-01 13:38:50.277275 | orchestrator | "quorum_age": 2357, 2025-11-01 13:38:50.277285 | orchestrator | "features": { 2025-11-01 13:38:50.277296 | orchestrator | "quorum_con": "4540138322906710015", 2025-11-01 13:38:50.277306 | orchestrator | "quorum_mon": [ 2025-11-01 13:38:50.277316 | orchestrator | "kraken", 2025-11-01 13:38:50.277401 | orchestrator | "luminous", 2025-11-01 13:38:50.277460 | orchestrator | "mimic", 2025-11-01 13:38:50.277471 | orchestrator | "osdmap-prune", 2025-11-01 13:38:50.277482 | orchestrator | "nautilus", 2025-11-01 13:38:50.277493 | orchestrator | "octopus", 2025-11-01 13:38:50.277504 | orchestrator | "pacific", 2025-11-01 13:38:50.277515 | orchestrator | "elector-pinging", 2025-11-01 13:38:50.277525 | orchestrator | "quincy", 2025-11-01 13:38:50.277536 | orchestrator | "reef" 2025-11-01 13:38:50.277546 | orchestrator | ] 2025-11-01 13:38:50.277557 | orchestrator | }, 2025-11-01 13:38:50.277568 | orchestrator | "monmap": { 2025-11-01 13:38:50.277578 | orchestrator | "epoch": 1, 2025-11-01 13:38:50.277589 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-11-01 13:38:50.277600 | orchestrator | "modified": "2025-11-01T12:59:03.578734Z", 2025-11-01 13:38:50.277610 | orchestrator | "created": "2025-11-01T12:59:03.578734Z", 2025-11-01 
13:38:50.277621 | orchestrator | "min_mon_release": 18, 2025-11-01 13:38:50.277631 | orchestrator | "min_mon_release_name": "reef", 2025-11-01 13:38:50.277642 | orchestrator | "election_strategy": 1, 2025-11-01 13:38:50.277654 | orchestrator | "disallowed_leaders: ": "", 2025-11-01 13:38:50.277666 | orchestrator | "stretch_mode": false, 2025-11-01 13:38:50.277678 | orchestrator | "tiebreaker_mon": "", 2025-11-01 13:38:50.277690 | orchestrator | "removed_ranks: ": "", 2025-11-01 13:38:50.277701 | orchestrator | "features": { 2025-11-01 13:38:50.277713 | orchestrator | "persistent": [ 2025-11-01 13:38:50.277725 | orchestrator | "kraken", 2025-11-01 13:38:50.277738 | orchestrator | "luminous", 2025-11-01 13:38:50.277749 | orchestrator | "mimic", 2025-11-01 13:38:50.277762 | orchestrator | "osdmap-prune", 2025-11-01 13:38:50.277773 | orchestrator | "nautilus", 2025-11-01 13:38:50.277786 | orchestrator | "octopus", 2025-11-01 13:38:50.277798 | orchestrator | "pacific", 2025-11-01 13:38:50.277810 | orchestrator | "elector-pinging", 2025-11-01 13:38:50.277821 | orchestrator | "quincy", 2025-11-01 13:38:50.277833 | orchestrator | "reef" 2025-11-01 13:38:50.277845 | orchestrator | ], 2025-11-01 13:38:50.277857 | orchestrator | "optional": [] 2025-11-01 13:38:50.277869 | orchestrator | }, 2025-11-01 13:38:50.277881 | orchestrator | "mons": [ 2025-11-01 13:38:50.277893 | orchestrator | { 2025-11-01 13:38:50.277906 | orchestrator | "rank": 0, 2025-11-01 13:38:50.277918 | orchestrator | "name": "testbed-node-0", 2025-11-01 13:38:50.277930 | orchestrator | "public_addrs": { 2025-11-01 13:38:50.277942 | orchestrator | "addrvec": [ 2025-11-01 13:38:50.277954 | orchestrator | { 2025-11-01 13:38:50.277965 | orchestrator | "type": "v2", 2025-11-01 13:38:50.277977 | orchestrator | "addr": "192.168.16.10:3300", 2025-11-01 13:38:50.277989 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278001 | orchestrator | }, 2025-11-01 13:38:50.278012 | orchestrator | { 2025-11-01 13:38:50.278069 | 
orchestrator | "type": "v1", 2025-11-01 13:38:50.278081 | orchestrator | "addr": "192.168.16.10:6789", 2025-11-01 13:38:50.278092 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278102 | orchestrator | } 2025-11-01 13:38:50.278113 | orchestrator | ] 2025-11-01 13:38:50.278123 | orchestrator | }, 2025-11-01 13:38:50.278134 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-11-01 13:38:50.278145 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-11-01 13:38:50.278155 | orchestrator | "priority": 0, 2025-11-01 13:38:50.278166 | orchestrator | "weight": 0, 2025-11-01 13:38:50.278176 | orchestrator | "crush_location": "{}" 2025-11-01 13:38:50.278187 | orchestrator | }, 2025-11-01 13:38:50.278198 | orchestrator | { 2025-11-01 13:38:50.278208 | orchestrator | "rank": 1, 2025-11-01 13:38:50.278219 | orchestrator | "name": "testbed-node-1", 2025-11-01 13:38:50.278229 | orchestrator | "public_addrs": { 2025-11-01 13:38:50.278240 | orchestrator | "addrvec": [ 2025-11-01 13:38:50.278250 | orchestrator | { 2025-11-01 13:38:50.278261 | orchestrator | "type": "v2", 2025-11-01 13:38:50.278271 | orchestrator | "addr": "192.168.16.11:3300", 2025-11-01 13:38:50.278291 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278301 | orchestrator | }, 2025-11-01 13:38:50.278312 | orchestrator | { 2025-11-01 13:38:50.278342 | orchestrator | "type": "v1", 2025-11-01 13:38:50.278353 | orchestrator | "addr": "192.168.16.11:6789", 2025-11-01 13:38:50.278364 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278374 | orchestrator | } 2025-11-01 13:38:50.278385 | orchestrator | ] 2025-11-01 13:38:50.278396 | orchestrator | }, 2025-11-01 13:38:50.278406 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-11-01 13:38:50.278417 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-11-01 13:38:50.278427 | orchestrator | "priority": 0, 2025-11-01 13:38:50.278438 | orchestrator | "weight": 0, 2025-11-01 13:38:50.278448 | orchestrator | "crush_location": "{}" 2025-11-01 13:38:50.278459 | 
orchestrator | }, 2025-11-01 13:38:50.278469 | orchestrator | { 2025-11-01 13:38:50.278480 | orchestrator | "rank": 2, 2025-11-01 13:38:50.278490 | orchestrator | "name": "testbed-node-2", 2025-11-01 13:38:50.278501 | orchestrator | "public_addrs": { 2025-11-01 13:38:50.278511 | orchestrator | "addrvec": [ 2025-11-01 13:38:50.278522 | orchestrator | { 2025-11-01 13:38:50.278532 | orchestrator | "type": "v2", 2025-11-01 13:38:50.278543 | orchestrator | "addr": "192.168.16.12:3300", 2025-11-01 13:38:50.278553 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278564 | orchestrator | }, 2025-11-01 13:38:50.278575 | orchestrator | { 2025-11-01 13:38:50.278585 | orchestrator | "type": "v1", 2025-11-01 13:38:50.278596 | orchestrator | "addr": "192.168.16.12:6789", 2025-11-01 13:38:50.278606 | orchestrator | "nonce": 0 2025-11-01 13:38:50.278617 | orchestrator | } 2025-11-01 13:38:50.278628 | orchestrator | ] 2025-11-01 13:38:50.278638 | orchestrator | }, 2025-11-01 13:38:50.278649 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-11-01 13:38:50.278659 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-11-01 13:38:50.278670 | orchestrator | "priority": 0, 2025-11-01 13:38:50.278680 | orchestrator | "weight": 0, 2025-11-01 13:38:50.278691 | orchestrator | "crush_location": "{}" 2025-11-01 13:38:50.278702 | orchestrator | } 2025-11-01 13:38:50.278712 | orchestrator | ] 2025-11-01 13:38:50.278723 | orchestrator | } 2025-11-01 13:38:50.278734 | orchestrator | } 2025-11-01 13:38:50.278755 | orchestrator | 2025-11-01 13:38:50.278767 | orchestrator | # Ceph free space status 2025-11-01 13:38:50.278777 | orchestrator | 2025-11-01 13:38:50.278788 | orchestrator | + echo 2025-11-01 13:38:50.278799 | orchestrator | + echo '# Ceph free space status' 2025-11-01 13:38:50.278809 | orchestrator | + echo 2025-11-01 13:38:50.278820 | orchestrator | + ceph df 2025-11-01 13:38:50.939110 | orchestrator | --- RAW STORAGE --- 2025-11-01 13:38:50.939209 | orchestrator | CLASS SIZE 
AVAIL USED RAW USED %RAW USED 2025-11-01 13:38:50.939239 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-11-01 13:38:50.939253 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-11-01 13:38:50.939264 | orchestrator | 2025-11-01 13:38:50.939276 | orchestrator | --- POOLS --- 2025-11-01 13:38:50.939288 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-11-01 13:38:50.939300 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-11-01 13:38:50.939311 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-11-01 13:38:50.939367 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-11-01 13:38:50.939380 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-11-01 13:38:50.939392 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-11-01 13:38:50.939403 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-11-01 13:38:50.939414 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-11-01 13:38:50.939425 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-11-01 13:38:50.939450 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB 2025-11-01 13:38:50.939462 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 13:38:50.939473 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 13:38:50.939484 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.99 35 GiB 2025-11-01 13:38:50.939519 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 13:38:50.939531 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-11-01 13:38:51.000540 | orchestrator | ++ semver latest 5.0.0 2025-11-01 13:38:51.058232 | orchestrator | + [[ -1 -eq -1 ]] 2025-11-01 13:38:51.058271 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-11-01 13:38:51.058284 | orchestrator | + [[ ! 
-e /etc/redhat-release ]] 2025-11-01 13:38:51.058295 | orchestrator | + osism apply facts 2025-11-01 13:39:03.402477 | orchestrator | 2025-11-01 13:39:03 | INFO  | Task 17107c8f-34fb-4d95-b22a-e077d318925a (facts) was prepared for execution. 2025-11-01 13:39:03.402574 | orchestrator | 2025-11-01 13:39:03 | INFO  | It takes a moment until task 17107c8f-34fb-4d95-b22a-e077d318925a (facts) has been started and output is visible here. 2025-11-01 13:39:18.024997 | orchestrator | 2025-11-01 13:39:18.025115 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-11-01 13:39:18.025133 | orchestrator | 2025-11-01 13:39:18.025146 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-11-01 13:39:18.025158 | orchestrator | Saturday 01 November 2025 13:39:08 +0000 (0:00:00.302) 0:00:00.302 ***** 2025-11-01 13:39:18.025169 | orchestrator | ok: [testbed-manager] 2025-11-01 13:39:18.025181 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:18.025192 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:39:18.025203 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:18.025213 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:18.025224 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:18.025235 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:18.025246 | orchestrator | 2025-11-01 13:39:18.025257 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-11-01 13:39:18.025268 | orchestrator | Saturday 01 November 2025 13:39:10 +0000 (0:00:01.614) 0:00:01.917 ***** 2025-11-01 13:39:18.025278 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:39:18.025290 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:18.025301 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:18.025312 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:18.025322 | orchestrator | skipping: [testbed-node-3] 2025-11-01 
13:39:18.025381 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:18.025392 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:39:18.025403 | orchestrator | 2025-11-01 13:39:18.025414 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-11-01 13:39:18.025424 | orchestrator | 2025-11-01 13:39:18.025435 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-11-01 13:39:18.025446 | orchestrator | Saturday 01 November 2025 13:39:11 +0000 (0:00:01.543) 0:00:03.460 ***** 2025-11-01 13:39:18.025457 | orchestrator | ok: [testbed-node-2] 2025-11-01 13:39:18.025468 | orchestrator | ok: [testbed-node-1] 2025-11-01 13:39:18.025479 | orchestrator | ok: [testbed-node-0] 2025-11-01 13:39:18.025489 | orchestrator | ok: [testbed-manager] 2025-11-01 13:39:18.025500 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:39:18.025511 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:39:18.025521 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:39:18.025532 | orchestrator | 2025-11-01 13:39:18.025545 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-11-01 13:39:18.025557 | orchestrator | 2025-11-01 13:39:18.025571 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-11-01 13:39:18.025583 | orchestrator | Saturday 01 November 2025 13:39:16 +0000 (0:00:05.305) 0:00:08.766 ***** 2025-11-01 13:39:18.025596 | orchestrator | skipping: [testbed-manager] 2025-11-01 13:39:18.025608 | orchestrator | skipping: [testbed-node-0] 2025-11-01 13:39:18.025621 | orchestrator | skipping: [testbed-node-1] 2025-11-01 13:39:18.025634 | orchestrator | skipping: [testbed-node-2] 2025-11-01 13:39:18.025646 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:39:18.025659 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:39:18.025697 | orchestrator | skipping: [testbed-node-5] 
2025-11-01 13:39:18.025711 | orchestrator |
2025-11-01 13:39:18.025723 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:39:18.025737 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025750 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025763 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025791 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025804 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025816 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025829 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:18.025841 | orchestrator |
2025-11-01 13:39:18.025853 | orchestrator |
2025-11-01 13:39:18.025866 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:39:18.025879 | orchestrator | Saturday 01 November 2025 13:39:17 +0000 (0:00:00.622) 0:00:09.388 *****
2025-11-01 13:39:18.025892 | orchestrator | ===============================================================================
2025-11-01 13:39:18.025905 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.31s
2025-11-01 13:39:18.025917 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.61s
2025-11-01 13:39:18.025928 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.54s
2025-11-01 13:39:18.025939 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2025-11-01 13:39:18.479441 | orchestrator | + osism validate ceph-mons
2025-11-01 13:39:53.580443 | orchestrator |
2025-11-01 13:39:53.580530 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-11-01 13:39:53.580540 | orchestrator |
2025-11-01 13:39:53.580546 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-11-01 13:39:53.580554 | orchestrator | Saturday 01 November 2025 13:39:36 +0000 (0:00:00.508) 0:00:00.508 *****
2025-11-01 13:39:53.580560 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.580567 | orchestrator |
2025-11-01 13:39:53.580573 | orchestrator | TASK [Create report output directory] ******************************************
2025-11-01 13:39:53.580579 | orchestrator | Saturday 01 November 2025 13:39:37 +0000 (0:00:00.891) 0:00:01.400 *****
2025-11-01 13:39:53.580586 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.580592 | orchestrator |
2025-11-01 13:39:53.580598 | orchestrator | TASK [Define report vars] ******************************************************
2025-11-01 13:39:53.580604 | orchestrator | Saturday 01 November 2025 13:39:38 +0000 (0:00:01.123) 0:00:02.524 *****
2025-11-01 13:39:53.580610 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580617 | orchestrator |
2025-11-01 13:39:53.580623 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-11-01 13:39:53.580629 | orchestrator | Saturday 01 November 2025 13:39:38 +0000 (0:00:00.349) 0:00:02.663 *****
2025-11-01 13:39:53.580635 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580642 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:39:53.580648 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:39:53.580654 | orchestrator |
2025-11-01 13:39:53.580660 | orchestrator | TASK [Get container info] ******************************************************
2025-11-01 13:39:53.580666 | orchestrator | Saturday 01 November 2025 13:39:38 +0000 (0:00:00.349) 0:00:03.013 *****
2025-11-01 13:39:53.580690 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:39:53.580697 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:39:53.580703 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580709 | orchestrator |
2025-11-01 13:39:53.580715 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-11-01 13:39:53.580721 | orchestrator | Saturday 01 November 2025 13:39:39 +0000 (0:00:01.079) 0:00:04.093 *****
2025-11-01 13:39:53.580728 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.580734 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:39:53.580740 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:39:53.580746 | orchestrator |
2025-11-01 13:39:53.580752 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-11-01 13:39:53.580758 | orchestrator | Saturday 01 November 2025 13:39:40 +0000 (0:00:00.332) 0:00:04.426 *****
2025-11-01 13:39:53.580764 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580770 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:39:53.580776 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:39:53.580782 | orchestrator |
2025-11-01 13:39:53.580788 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:39:53.580794 | orchestrator | Saturday 01 November 2025 13:39:40 +0000 (0:00:00.329) 0:00:05.064 *****
2025-11-01 13:39:53.580801 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580807 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:39:53.580813 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:39:53.580819 | orchestrator |
2025-11-01 13:39:53.580825 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-11-01 13:39:53.580831 | orchestrator | Saturday 01 November 2025 13:39:41 +0000 (0:00:00.329) 0:00:05.393 *****
2025-11-01 13:39:53.580837 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.580844 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:39:53.580850 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:39:53.580856 | orchestrator |
2025-11-01 13:39:53.580862 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-11-01 13:39:53.580868 | orchestrator | Saturday 01 November 2025 13:39:41 +0000 (0:00:00.333) 0:00:05.727 *****
2025-11-01 13:39:53.580874 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.580880 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:39:53.580886 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:39:53.580892 | orchestrator |
2025-11-01 13:39:53.580898 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:39:53.580904 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:00.550) 0:00:06.278 *****
2025-11-01 13:39:53.580910 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.580916 | orchestrator |
2025-11-01 13:39:53.580922 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:39:53.580928 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:00.272) 0:00:06.551 *****
2025-11-01 13:39:53.580934 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.580940 | orchestrator |
2025-11-01 13:39:53.580959 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:39:53.580967 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:00.269) 0:00:06.820 *****
2025-11-01 13:39:53.580981 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.580988 | orchestrator |
2025-11-01 13:39:53.580995 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:53.581002 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:00.089) 0:00:07.090 *****
2025-11-01 13:39:53.581009 | orchestrator |
2025-11-01 13:39:53.581016 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:53.581023 | orchestrator | Saturday 01 November 2025 13:39:42 +0000 (0:00:00.089) 0:00:07.180 *****
2025-11-01 13:39:53.581030 | orchestrator |
2025-11-01 13:39:53.581037 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:53.581043 | orchestrator | Saturday 01 November 2025 13:39:43 +0000 (0:00:00.088) 0:00:07.268 *****
2025-11-01 13:39:53.581055 | orchestrator |
2025-11-01 13:39:53.581062 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:39:53.581070 | orchestrator | Saturday 01 November 2025 13:39:43 +0000 (0:00:00.083) 0:00:07.352 *****
2025-11-01 13:39:53.581076 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581083 | orchestrator |
2025-11-01 13:39:53.581090 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-11-01 13:39:53.581097 | orchestrator | Saturday 01 November 2025 13:39:43 +0000 (0:00:00.272) 0:00:07.624 *****
2025-11-01 13:39:53.581103 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581110 | orchestrator |
2025-11-01 13:39:53.581129 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-11-01 13:39:53.581136 | orchestrator | Saturday 01 November 2025 13:39:43 +0000 (0:00:00.291) 0:00:07.915 *****
2025-11-01 13:39:53.581143 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581150 | orchestrator |
2025-11-01 13:39:53.581157 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-11-01 13:39:53.581164 | orchestrator | Saturday 01 November 2025 13:39:43 +0000 (0:00:00.131) 0:00:08.047 *****
2025-11-01 13:39:53.581171 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:39:53.581178 | orchestrator |
2025-11-01 13:39:53.581199 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-11-01 13:39:53.581207 | orchestrator | Saturday 01 November 2025 13:39:45 +0000 (0:00:01.849) 0:00:09.897 *****
2025-11-01 13:39:53.581213 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581220 | orchestrator |
2025-11-01 13:39:53.581227 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-11-01 13:39:53.581234 | orchestrator | Saturday 01 November 2025 13:39:46 +0000 (0:00:00.593) 0:00:10.490 *****
2025-11-01 13:39:53.581241 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581247 | orchestrator |
2025-11-01 13:39:53.581254 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-11-01 13:39:53.581261 | orchestrator | Saturday 01 November 2025 13:39:46 +0000 (0:00:00.120) 0:00:10.611 *****
2025-11-01 13:39:53.581268 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581275 | orchestrator |
2025-11-01 13:39:53.581282 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-11-01 13:39:53.581289 | orchestrator | Saturday 01 November 2025 13:39:46 +0000 (0:00:00.365) 0:00:10.977 *****
2025-11-01 13:39:53.581296 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581303 | orchestrator |
2025-11-01 13:39:53.581309 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-11-01 13:39:53.581316 | orchestrator | Saturday 01 November 2025 13:39:47 +0000 (0:00:00.340) 0:00:11.318 *****
2025-11-01 13:39:53.581323 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581346 | orchestrator |
2025-11-01 13:39:53.581353 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-11-01 13:39:53.581359 | orchestrator | Saturday 01 November 2025 13:39:47 +0000 (0:00:00.116) 0:00:11.435 *****
2025-11-01 13:39:53.581365 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581371 | orchestrator |
2025-11-01 13:39:53.581377 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-11-01 13:39:53.581384 | orchestrator | Saturday 01 November 2025 13:39:47 +0000 (0:00:00.136) 0:00:11.571 *****
2025-11-01 13:39:53.581390 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581396 | orchestrator |
2025-11-01 13:39:53.581402 | orchestrator | TASK [Gather status data] ******************************************************
2025-11-01 13:39:53.581408 | orchestrator | Saturday 01 November 2025 13:39:47 +0000 (0:00:00.131) 0:00:11.702 *****
2025-11-01 13:39:53.581414 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:39:53.581421 | orchestrator |
2025-11-01 13:39:53.581427 | orchestrator | TASK [Set health test data] ****************************************************
2025-11-01 13:39:53.581433 | orchestrator | Saturday 01 November 2025 13:39:49 +0000 (0:00:01.617) 0:00:13.319 *****
2025-11-01 13:39:53.581444 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581450 | orchestrator |
2025-11-01 13:39:53.581456 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-11-01 13:39:53.581463 | orchestrator | Saturday 01 November 2025 13:39:49 +0000 (0:00:00.330) 0:00:13.650 *****
2025-11-01 13:39:53.581469 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581475 | orchestrator |
2025-11-01 13:39:53.581481 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-11-01 13:39:53.581487 | orchestrator | Saturday 01 November 2025 13:39:49 +0000 (0:00:00.188) 0:00:13.838 *****
2025-11-01 13:39:53.581493 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:39:53.581500 | orchestrator |
2025-11-01 13:39:53.581506 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-11-01 13:39:53.581512 | orchestrator | Saturday 01 November 2025 13:39:49 +0000 (0:00:00.154) 0:00:13.993 *****
2025-11-01 13:39:53.581518 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581524 | orchestrator |
2025-11-01 13:39:53.581530 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-11-01 13:39:53.581537 | orchestrator | Saturday 01 November 2025 13:39:49 +0000 (0:00:00.159) 0:00:14.152 *****
2025-11-01 13:39:53.581543 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581549 | orchestrator |
2025-11-01 13:39:53.581558 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-11-01 13:39:53.581565 | orchestrator | Saturday 01 November 2025 13:39:50 +0000 (0:00:00.355) 0:00:14.508 *****
2025-11-01 13:39:53.581571 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.581577 | orchestrator |
2025-11-01 13:39:53.581583 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-11-01 13:39:53.581590 | orchestrator | Saturday 01 November 2025 13:39:50 +0000 (0:00:00.298) 0:00:14.807 *****
2025-11-01 13:39:53.581596 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:39:53.581602 | orchestrator |
2025-11-01 13:39:53.581608 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:39:53.581614 | orchestrator | Saturday 01 November 2025 13:39:50 +0000 (0:00:00.299) 0:00:15.106 *****
2025-11-01 13:39:53.581620 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.581626 | orchestrator |
2025-11-01 13:39:53.581633 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:39:53.581639 | orchestrator | Saturday 01 November 2025 13:39:52 +0000 (0:00:01.916) 0:00:17.023 *****
2025-11-01 13:39:53.581645 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.581651 | orchestrator |
2025-11-01 13:39:53.581657 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:39:53.581663 | orchestrator | Saturday 01 November 2025 13:39:53 +0000 (0:00:00.276) 0:00:17.299 *****
2025-11-01 13:39:53.581670 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:53.581676 | orchestrator |
2025-11-01 13:39:53.581686 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:56.565044 | orchestrator | Saturday 01 November 2025 13:39:53 +0000 (0:00:00.264) 0:00:17.564 *****
2025-11-01 13:39:56.565147 | orchestrator |
2025-11-01 13:39:56.565162 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:56.565173 | orchestrator | Saturday 01 November 2025 13:39:53 +0000 (0:00:00.089) 0:00:17.654 *****
2025-11-01 13:39:56.565183 | orchestrator |
2025-11-01 13:39:56.565192 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:39:56.565202 | orchestrator | Saturday 01 November 2025 13:39:53 +0000 (0:00:00.077) 0:00:17.731 *****
2025-11-01 13:39:56.565212 | orchestrator |
2025-11-01 13:39:56.565221 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-11-01 13:39:56.565231 | orchestrator | Saturday 01 November 2025 13:39:53 +0000 (0:00:00.077) 0:00:17.809 *****
2025-11-01 13:39:56.565240 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:39:56.565278 | orchestrator |
2025-11-01 13:39:56.565293 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:39:56.565310 | orchestrator | Saturday 01 November 2025 13:39:55 +0000 (0:00:01.606) 0:00:19.416 *****
2025-11-01 13:39:56.565326 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-11-01 13:39:56.565406 | orchestrator |  "msg": [
2025-11-01 13:39:56.565424 | orchestrator |  "Validator run completed.",
2025-11-01 13:39:56.565439 | orchestrator |  "You can find the report file here:",
2025-11-01 13:39:56.565454 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-11-01T13:39:37+00:00-report.json",
2025-11-01 13:39:56.565472 | orchestrator |  "on the following host:",
2025-11-01 13:39:56.565488 | orchestrator |  "testbed-manager"
2025-11-01 13:39:56.565503 | orchestrator |  ]
2025-11-01 13:39:56.565518 | orchestrator | }
2025-11-01 13:39:56.565533 | orchestrator |
2025-11-01 13:39:56.565549 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:39:56.565567 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-11-01 13:39:56.565584 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:56.565601 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:39:56.565618 | orchestrator |
2025-11-01 13:39:56.565635 | orchestrator |
2025-11-01 13:39:56.565652 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:39:56.565668 | orchestrator | Saturday 01 November 2025 13:39:56 +0000 (0:00:00.955) 0:00:20.371 *****
2025-11-01 13:39:56.565685 | orchestrator | ===============================================================================
2025-11-01 13:39:56.565703 | orchestrator | Aggregate test results step one ----------------------------------------- 1.92s
2025-11-01 13:39:56.565721 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.85s
2025-11-01 13:39:56.565737 | orchestrator | Gather status data ------------------------------------------------------ 1.62s
2025-11-01 13:39:56.565749 | orchestrator | Write report file ------------------------------------------------------- 1.61s
2025-11-01 13:39:56.565760 | orchestrator | Create report output directory ------------------------------------------ 1.12s
2025-11-01 13:39:56.565770 | orchestrator | Get container info ------------------------------------------------------ 1.08s
2025-11-01 13:39:56.565781 | orchestrator | Print report file information ------------------------------------------- 0.96s
2025-11-01 13:39:56.565792 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s
2025-11-01 13:39:56.565802 | orchestrator | Set test result to passed if container is existing ---------------------- 0.64s
2025-11-01 13:39:56.565813 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s
2025-11-01 13:39:56.565823 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.55s
2025-11-01 13:39:56.565839 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s
2025-11-01 13:39:56.565856 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s
2025-11-01 13:39:56.565872 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s
2025-11-01 13:39:56.565888 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2025-11-01 13:39:56.565904 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.33s
2025-11-01 13:39:56.565921 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2025-11-01 13:39:56.565938 | orchestrator | Set health test data ---------------------------------------------------- 0.33s
2025-11-01 13:39:56.565954 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-11-01 13:39:56.565987 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s
2025-11-01 13:39:56.955763 | orchestrator | + osism validate ceph-mgrs
2025-11-01 13:40:30.508032 | orchestrator |
2025-11-01 13:40:30.508150 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-11-01 13:40:30.508167 | orchestrator |
2025-11-01 13:40:30.508179 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-11-01 13:40:30.508191 | orchestrator | Saturday 01 November 2025 13:40:14 +0000 (0:00:00.502) 0:00:00.502 *****
2025-11-01 13:40:30.508202 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.508213 | orchestrator |
2025-11-01 13:40:30.508224 | orchestrator | TASK [Create report output directory] ******************************************
2025-11-01 13:40:30.508235 | orchestrator | Saturday 01 November 2025 13:40:15 +0000 (0:00:00.939) 0:00:01.441 *****
2025-11-01 13:40:30.508246 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.508256 | orchestrator |
2025-11-01 13:40:30.508267 | orchestrator | TASK [Define report vars] ******************************************************
2025-11-01 13:40:30.508278 | orchestrator | Saturday 01 November 2025 13:40:16 +0000 (0:00:01.148) 0:00:02.590 *****
2025-11-01 13:40:30.508289 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508300 | orchestrator |
2025-11-01 13:40:30.508311 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-11-01 13:40:30.508322 | orchestrator | Saturday 01 November 2025 13:40:16 +0000 (0:00:00.153) 0:00:02.744 *****
2025-11-01 13:40:30.508378 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508390 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:40:30.508401 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:40:30.508412 | orchestrator |
2025-11-01 13:40:30.508422 | orchestrator | TASK [Get container info] ******************************************************
2025-11-01 13:40:30.508433 | orchestrator | Saturday 01 November 2025 13:40:17 +0000 (0:00:00.339) 0:00:03.083 *****
2025-11-01 13:40:30.508444 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:40:30.508455 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:40:30.508466 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508477 | orchestrator |
2025-11-01 13:40:30.508488 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-11-01 13:40:30.508499 | orchestrator | Saturday 01 November 2025 13:40:18 +0000 (0:00:01.127) 0:00:04.211 *****
2025-11-01 13:40:30.508509 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.508520 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:40:30.508531 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:40:30.508542 | orchestrator |
2025-11-01 13:40:30.508553 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-11-01 13:40:30.508565 | orchestrator | Saturday 01 November 2025 13:40:18 +0000 (0:00:00.352) 0:00:04.563 *****
2025-11-01 13:40:30.508577 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508589 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:40:30.508601 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:40:30.508612 | orchestrator |
2025-11-01 13:40:30.508625 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:40:30.508638 | orchestrator | Saturday 01 November 2025 13:40:19 +0000 (0:00:00.575) 0:00:05.138 *****
2025-11-01 13:40:30.508650 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508662 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:40:30.508674 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:40:30.508686 | orchestrator |
2025-11-01 13:40:30.508698 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-11-01 13:40:30.508710 | orchestrator | Saturday 01 November 2025 13:40:19 +0000 (0:00:00.372) 0:00:05.511 *****
2025-11-01 13:40:30.508723 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.508735 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:40:30.508747 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:40:30.508759 | orchestrator |
2025-11-01 13:40:30.508771 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-11-01 13:40:30.508783 | orchestrator | Saturday 01 November 2025 13:40:19 +0000 (0:00:00.316) 0:00:05.827 *****
2025-11-01 13:40:30.508820 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.508832 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:40:30.508844 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:40:30.508856 | orchestrator |
2025-11-01 13:40:30.508868 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:40:30.508881 | orchestrator | Saturday 01 November 2025 13:40:20 +0000 (0:00:00.561) 0:00:06.388 *****
2025-11-01 13:40:30.508912 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.508924 | orchestrator |
2025-11-01 13:40:30.508934 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:40:30.508945 | orchestrator | Saturday 01 November 2025 13:40:20 +0000 (0:00:00.280) 0:00:06.669 *****
2025-11-01 13:40:30.508956 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.508966 | orchestrator |
2025-11-01 13:40:30.508977 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:40:30.508988 | orchestrator | Saturday 01 November 2025 13:40:20 +0000 (0:00:00.255) 0:00:06.925 *****
2025-11-01 13:40:30.508998 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.509008 | orchestrator |
2025-11-01 13:40:30.509019 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509030 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.295) 0:00:07.220 *****
2025-11-01 13:40:30.509040 | orchestrator |
2025-11-01 13:40:30.509058 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509069 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.084) 0:00:07.305 *****
2025-11-01 13:40:30.509079 | orchestrator |
2025-11-01 13:40:30.509090 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509101 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.076) 0:00:07.381 *****
2025-11-01 13:40:30.509112 | orchestrator |
2025-11-01 13:40:30.509123 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:40:30.509133 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.089) 0:00:07.470 *****
2025-11-01 13:40:30.509144 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.509155 | orchestrator |
2025-11-01 13:40:30.509165 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-11-01 13:40:30.509176 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.245) 0:00:07.716 *****
2025-11-01 13:40:30.509186 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.509197 | orchestrator |
2025-11-01 13:40:30.509226 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-11-01 13:40:30.509238 | orchestrator | Saturday 01 November 2025 13:40:21 +0000 (0:00:00.261) 0:00:07.978 *****
2025-11-01 13:40:30.509249 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.509259 | orchestrator |
2025-11-01 13:40:30.509270 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-11-01 13:40:30.509280 | orchestrator | Saturday 01 November 2025 13:40:22 +0000 (0:00:00.129) 0:00:08.107 *****
2025-11-01 13:40:30.509291 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:40:30.509301 | orchestrator |
2025-11-01 13:40:30.509312 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-11-01 13:40:30.509322 | orchestrator | Saturday 01 November 2025 13:40:24 +0000 (0:00:02.270) 0:00:10.377 *****
2025-11-01 13:40:30.509355 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.509366 | orchestrator |
2025-11-01 13:40:30.509377 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-11-01 13:40:30.509387 | orchestrator | Saturday 01 November 2025 13:40:24 +0000 (0:00:00.498) 0:00:10.876 *****
2025-11-01 13:40:30.509398 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.509408 | orchestrator |
2025-11-01 13:40:30.509419 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-11-01 13:40:30.509429 | orchestrator | Saturday 01 November 2025 13:40:25 +0000 (0:00:00.169) 0:00:11.259 *****
2025-11-01 13:40:30.509449 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.509460 | orchestrator |
2025-11-01 13:40:30.509470 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-11-01 13:40:30.509481 | orchestrator | Saturday 01 November 2025 13:40:25 +0000 (0:00:00.161) 0:00:11.428 *****
2025-11-01 13:40:30.509492 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:40:30.509502 | orchestrator |
2025-11-01 13:40:30.509513 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-11-01 13:40:30.509524 | orchestrator | Saturday 01 November 2025 13:40:25 +0000 (0:00:00.161) 0:00:11.590 *****
2025-11-01 13:40:30.509534 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.509545 | orchestrator |
2025-11-01 13:40:30.509556 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-11-01 13:40:30.509566 | orchestrator | Saturday 01 November 2025 13:40:25 +0000 (0:00:00.280) 0:00:11.871 *****
2025-11-01 13:40:30.509577 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:40:30.509587 | orchestrator |
2025-11-01 13:40:30.509598 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:40:30.509608 | orchestrator | Saturday 01 November 2025 13:40:26 +0000 (0:00:00.275) 0:00:12.146 *****
2025-11-01 13:40:30.509619 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.509629 | orchestrator |
2025-11-01 13:40:30.509640 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:40:30.509651 | orchestrator | Saturday 01 November 2025 13:40:27 +0000 (0:00:01.403) 0:00:13.550 *****
2025-11-01 13:40:30.509661 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.509672 | orchestrator |
2025-11-01 13:40:30.509682 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:40:30.509693 | orchestrator | Saturday 01 November 2025 13:40:27 +0000 (0:00:00.289) 0:00:13.839 *****
2025-11-01 13:40:30.509703 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.509714 | orchestrator |
2025-11-01 13:40:30.509724 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509735 | orchestrator | Saturday 01 November 2025 13:40:28 +0000 (0:00:00.296) 0:00:14.136 *****
2025-11-01 13:40:30.509745 | orchestrator |
2025-11-01 13:40:30.509756 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509766 | orchestrator | Saturday 01 November 2025 13:40:28 +0000 (0:00:00.081) 0:00:14.218 *****
2025-11-01 13:40:30.509777 | orchestrator |
2025-11-01 13:40:30.509787 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:40:30.509798 | orchestrator | Saturday 01 November 2025 13:40:28 +0000 (0:00:00.073) 0:00:14.291 *****
2025-11-01 13:40:30.509808 | orchestrator |
2025-11-01 13:40:30.509818 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-11-01 13:40:30.509829 | orchestrator | Saturday 01 November 2025 13:40:28 +0000 (0:00:00.314) 0:00:14.606 *****
2025-11-01 13:40:30.509839 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:30.509850 | orchestrator |
2025-11-01 13:40:30.509860 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:40:30.509871 | orchestrator | Saturday 01 November 2025 13:40:30 +0000 (0:00:01.486) 0:00:16.092 *****
2025-11-01 13:40:30.509881 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-11-01 13:40:30.509892 | orchestrator |  "msg": [
2025-11-01 13:40:30.509903 | orchestrator |  "Validator run completed.",
2025-11-01 13:40:30.509913 | orchestrator |  "You can find the report file here:",
2025-11-01 13:40:30.509929 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-11-01T13:40:15+00:00-report.json",
2025-11-01 13:40:30.509941 | orchestrator |  "on the following host:",
2025-11-01 13:40:30.509952 | orchestrator |  "testbed-manager"
2025-11-01 13:40:30.509962 | orchestrator |  ]
2025-11-01 13:40:30.509973 | orchestrator | }
2025-11-01 13:40:30.509990 | orchestrator |
2025-11-01 13:40:30.510000 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:40:30.510067 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-11-01 13:40:30.510082 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:40:30.510101 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:40:30.951672 | orchestrator |
2025-11-01 13:40:30.951772 | orchestrator |
2025-11-01 13:40:30.951790 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:40:30.951805 | orchestrator | Saturday 01 November 2025 13:40:30 +0000 (0:00:00.469) 0:00:16.562 *****
2025-11-01 13:40:30.951820 | orchestrator | ===============================================================================
2025-11-01 13:40:30.951834 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.27s
2025-11-01 13:40:30.951847 | orchestrator | Write report file ------------------------------------------------------- 1.49s
2025-11-01 13:40:30.951860 | orchestrator | Aggregate test results step one ----------------------------------------- 1.40s
2025-11-01 13:40:30.951874 | orchestrator | Create report output directory ------------------------------------------ 1.15s
2025-11-01 13:40:30.951887 | orchestrator | Get container info ------------------------------------------------------ 1.13s
2025-11-01 13:40:30.951899 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s
2025-11-01 13:40:30.951910 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s
2025-11-01 13:40:30.951922 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.56s
2025-11-01 13:40:30.951932 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.50s
2025-11-01 13:40:30.951944 | orchestrator | Flush handlers ---------------------------------------------------------- 0.47s
2025-11-01 13:40:30.951955 | orchestrator | Print report file information ------------------------------------------- 0.47s
2025-11-01 13:40:30.951966 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.38s
2025-11-01 13:40:30.951977 | orchestrator | Prepare test data ------------------------------------------------------- 0.37s
2025-11-01 13:40:30.951988 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s
2025-11-01 13:40:30.951999 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s
2025-11-01 13:40:30.952011 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s
2025-11-01 13:40:30.952028 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2025-11-01 13:40:30.952040 | orchestrator | Aggregate test results step three --------------------------------------- 0.30s
2025-11-01 13:40:30.952051 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2025-11-01 13:40:30.952063 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2025-11-01 13:40:31.367522 | orchestrator | + osism validate ceph-osds
2025-11-01 13:40:54.357083 | orchestrator |
2025-11-01 13:40:54.357939 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-11-01 13:40:54.357972 | orchestrator |
2025-11-01 13:40:54.357984 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-11-01 13:40:54.357996 | orchestrator | Saturday 01 November 2025 13:40:49 +0000 (0:00:00.506) 0:00:00.506 *****
2025-11-01 13:40:54.358007 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:54.358064 | orchestrator |
2025-11-01 13:40:54.358078 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-11-01 13:40:54.358090 | orchestrator | Saturday 01 November 2025 13:40:50 +0000 (0:00:00.933) 0:00:01.439 *****
2025-11-01 13:40:54.358101 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:54.358137 | orchestrator |
2025-11-01 13:40:54.358148 | orchestrator | TASK [Create report output directory] ******************************************
2025-11-01 13:40:54.358159 | orchestrator | Saturday 01 November 2025 13:40:50 +0000 (0:00:00.640) 0:00:02.079 *****
2025-11-01 13:40:54.358170 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:40:54.358180 | orchestrator |
2025-11-01 13:40:54.358191 | orchestrator | TASK [Define report vars] ******************************************************
2025-11-01 13:40:54.358202 | orchestrator | Saturday 01 November 2025 13:40:51 +0000 (0:00:00.849) 0:00:02.929 *****
2025-11-01 13:40:54.358213 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:40:54.358225 | orchestrator |
2025-11-01 13:40:54.358236 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-11-01 13:40:54.358247 | orchestrator | Saturday 01 November 2025 13:40:51 +0000 (0:00:00.151) 0:00:03.080 *****
2025-11-01 13:40:54.358257 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:40:54.358269 | orchestrator |
2025-11-01 13:40:54.358279 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-11-01 13:40:54.358290 | orchestrator | Saturday 01 November 2025 13:40:51 +0000 (0:00:00.157) 0:00:03.238 *****
2025-11-01 13:40:54.358301 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:40:54.358311 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:40:54.358322 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:40:54.358363 | orchestrator |
2025-11-01 13:40:54.358375 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-11-01 13:40:54.358386 | orchestrator | Saturday 01 November 2025 13:40:52 +0000 (0:00:00.341) 0:00:03.579 *****
2025-11-01 13:40:54.358397 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:40:54.358407 | orchestrator |
2025-11-01 13:40:54.358418 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-11-01 13:40:54.358429 | orchestrator | Saturday 01 November 2025 13:40:52 +0000 (0:00:00.156) 0:00:03.736 *****
2025-11-01 13:40:54.358439 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:40:54.358450 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:40:54.358461 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:40:54.358471 | orchestrator |
2025-11-01 13:40:54.358482 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-11-01 13:40:54.358493 | orchestrator | Saturday 01 November 2025 13:40:52 +0000 (0:00:00.391) 0:00:04.127 *****
2025-11-01 13:40:54.358503 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:40:54.358514 | orchestrator |
2025-11-01 13:40:54.358525 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:40:54.358536 | orchestrator | Saturday 01 November 2025 13:40:53 +0000 (0:00:00.862) 0:00:04.990 *****
2025-11-01 13:40:54.358546 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:40:54.358557 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:40:54.358567 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:40:54.358578 | orchestrator |
2025-11-01 13:40:54.358588 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-11-01 13:40:54.358599 | orchestrator | Saturday 01 November 2025 13:40:54 +0000 (0:00:00.344) 0:00:05.335 *****
2025-11-01 13:40:54.358613 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71beeae098c0970a520c9bfc393eaeab4d8bd5206b0ce1248e8e3abd04d28a6a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.358627 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1e71f77688af4e033080f8c71da87c864cc77dccde61fa02a6bcae1a48353537', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.358638 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fdce37e6f72d79c900d2fcdaa1d011c34bf08db734755599b6c426f28985eea1', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.358701 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b5ab66e91c5bd82bbf84d79dea4767f7b30674ba38e62f997493e931328bc56', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.358726 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'db023aeed0c66ac4faa564ba1ddaff39b7fc7691f3c6f4f21ab65c6f974b9abb', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.358772 | orchestrator | skipping: [testbed-node-3] => (item={'id': '652542514c68b062e0fbe3fa372b685361a98d8d7a55e1ca7f226795574a3df7', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.358786 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e7882b71a07c959b19f95421a21d6ec94716167f8cf3556ca94106f952b6a2f7', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 24 minutes'})
2025-11-01 13:40:54.358797 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7026fa8f312668ca3c9fd4ef304be8d8bb8d2ff839bd7f209bb50941f6f7e2e6', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.358808 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0f27641758defae50d042cfad16a8a177fa20530fe0c362a9ac4f3ea52783efd', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.358823 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4df5d31485b6aab8ddc8779414cbd6dc7defef7708a5a41bc88077c5ed5bb0be', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 32 minutes'})
2025-11-01 13:40:54.358835 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7fb7125a0a42d28fe6769090b2315aadc91d4f0e0704b6cc86d193a53bb31c53', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.358850 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7a65fbefd033bad33bb6ba8f8dcc7f18457caab80221fa4eb74ec305ee70b151', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.358862 | orchestrator | ok: [testbed-node-3] => (item={'id': '50d91e4383a3109f1bb2532385d465ee8d250e9539368a97f4ca5c42c423e28a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:40:54.358873 | orchestrator | ok: [testbed-node-3] => (item={'id': '90e61af3d3465657527253d8721250c544a90f85f223dd4dc3f2c336c708c164', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:40:54.358884 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fa6cfae16cc0134097aeec5c9a3a6889e2b095a5cd797f92c60f6b1814c0eef9', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 39 minutes'})
2025-11-01 13:40:54.358896 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f867dc585a64475af7a104662019c8e60bc8c414e7efee5fb45e68f98e04d05d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2025-11-01 13:40:54.358907 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dd165fddb58cbb69f8b4c54e4d170be604d94512f96299d17c44b85123c730f2', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2025-11-01 13:40:54.358925 | orchestrator | skipping: [testbed-node-3] => (item={'id': '496a7844366edafbdd22d92b34ede8197347aae6459697467cb76aa03acfb2ea', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:40:54.358936 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c2d8875425b400c6654f8d1084b42c288eac8ccb5a8b5a11627d5516ee33b758', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:40:54.358947 | orchestrator | skipping: [testbed-node-3] => (item={'id': '911ac590eca882fc85a1a267c77c77476b468ce4e1120485c643afc350d353ef', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 43 minutes'})
2025-11-01 13:40:54.358958 | orchestrator | skipping: [testbed-node-4] => (item={'id': '237a67377452a720ac8624c25c65493b39a37a04c7070c73dc0983a4f56bda3d', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.358977 | orchestrator | skipping: [testbed-node-4] => (item={'id': '476f14874ddea477963ec234f631c428b333585dfbde3973f11b58f6537cd7a7', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.645919 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7807a83aad1ec94f8a31413b3b7ad0dd699d03b29eb3ebb8b27d24d2d2122a81', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.645988 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a56aa6e41c6420dfe47b4e529567aaf1025b4b923e464c38871c37817dc4ffb6', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.646002 | orchestrator | skipping: [testbed-node-4] => (item={'id': '006e43ade0f8269ae0d297a2879ba8308f7e38fb2842614516db863d7808faef', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.646014 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4c5b535769324b3619548962cd27dff6e15bcea885a7ae3b15ad0c185905b11e', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.646073 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8f2ec5b87923dc1f3b92b0213b64b2e17c0abef282fab29880868f1f077a993', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 24 minutes'})
2025-11-01 13:40:54.646097 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'da6183ce16c81b00e3e983b9a7a13707381f1cd6c5a432e3f8a91039147b7b29', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.646109 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8ef82d2242ca4ed522abad49324cab2055ee9fb71bd32140e0fe6964cba2238c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.646120 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ddd57e830e2891793aed6a8e552a84c4632d18efe3225877799ba59c44f6a30', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 32 minutes'})
2025-11-01 13:40:54.646131 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96730f3840ffdb62c268acbe3b897701fb537618680b24e0b24888b066742c56', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.646160 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5d36635eadcbac6b3aec4a9036d05bb19b7b3fcf542305aea1a57f1e96215e9b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.646174 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd2cfd76cc51d45714b79496d4cc940c3a6d8a56cfe347782948eee02cb63b843', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:40:54.646185 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ae530ee272845e0740fcc97664003a6e6f6ca287aef4edd770165f1f8be61761', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:40:54.646196 | orchestrator | skipping: [testbed-node-4] => (item={'id': '95d541b3e4e843fecdcf3acc02a23b504ef47adaa8329dc5d701ccbaf3af8851', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 39 minutes'})
2025-11-01 13:40:54.646208 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f987ace35e672ad19cc9326a93af12889133a528eb0d9c79fdb1f5dca70753e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2025-11-01 13:40:54.646219 | orchestrator | skipping: [testbed-node-4] => (item={'id': '527a5363cb8ba35050643a0cfab3f3b8574fa26c3f10da1f52485c21b03a29e0', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2025-11-01 13:40:54.646245 | orchestrator | skipping: [testbed-node-4] => (item={'id': '337299e7e5a09d54274393fd223050fbaa28966a9176da64c6c67413906e68f2', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:40:54.646257 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c071356c8d40add90570341a4c7e150f98a6b0dd86c8074b589cb4d5007dd47c', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:40:54.646268 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cc1edf0b46b397f3430248b8a3de14c0d7ec785ebe77b70232ca7dba444e1d8c', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 43 minutes'})
2025-11-01 13:40:54.646279 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c70d0d71810ede58d80c1720af1bc10fe919c196e6f3c3e2c859f25939c07ace', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.646291 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6204f088eacaecdb959ca50e531d5a304333274a6b70317dc99ffd4a981f4ef', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 18 minutes (healthy)'})
2025-11-01 13:40:54.646302 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3fdb2d4103acfbb910563a8af3b0497a74217aa22fd2334f3647ae4346f18ca8', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.646313 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83dd814fc1dc2510bed532123b5aff5499bb9e3ae1499aec9fda00735162e181', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 19 minutes (healthy)'})
2025-11-01 13:40:54.646325 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd6b5a5a6711fdf8e8d69090b2b9c56b0c662ba5799018751dbcefc03c4655319', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.646367 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd5f2f16ef924396df6b8ec645ecd33091bc5f9e908f119f5e43bf1b64bb744ab', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 23 minutes (healthy)'})
2025-11-01 13:40:54.646379 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e684945d5aad0d1a5940d68cac51a2771eac0b1cb1509e7034fcfd333b479db0', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 24 minutes'})
2025-11-01 13:40:54.646390 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b0595c1f67a870db5a2ce5af64ebbcf7addbbf27817cf22d8cb43915bbc29cd3', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.646401 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3f7d85db54cb196b1899977589d61b1dd859f649341604e16ecd9fc3d79b0c7a', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 25 minutes'})
2025-11-01 13:40:54.646412 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0d9528ad3689e5daa5f7a063bc5361539e7ae4468ab01a33b178a9342ac91f4b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 32 minutes'})
2025-11-01 13:40:54.646424 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c3821b4d9b8a0e90cd96bfca757aa988267d0d6b78f462d3d8e8e78cdda4056', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.646435 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6754a4af283354c3a1738ebc3565c1f33528f5b27ad0a376376b4b06417fd3c2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 34 minutes'})
2025-11-01 13:40:54.646453 | orchestrator | ok: [testbed-node-5] => (item={'id': '29f98580ccbd1387043203453221543b49590a14dd510bc6e230de4339e1c9b4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:41:04.337217 | orchestrator | ok: [testbed-node-5] => (item={'id': '14b91c7ad62589446aabe990d4bf76607160dfd8fe3f3b1728dfca7dca83ea97', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 35 minutes'})
2025-11-01 13:41:04.337387 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8b326c5dcd1226ed33f5ad485a3387d36d63bfd009fe90cec890159b004131a1', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 39 minutes'})
2025-11-01 13:41:04.337410 | orchestrator | skipping: [testbed-node-5] => (item={'id': '926c451aa0cf2fcce091000e3a1a72875762313c986e9ca90a3c01c8981ab1bd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2025-11-01 13:41:04.337425 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b818f43726b636f5525742501f6a8d2acfa83fa8c5629b6fa9c3bb66f77b1e18', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2025-11-01 13:41:04.337437 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b514bfdb5e4f1b0a1bf591c14078313a66ac54dd9775d3f68fec8c4bedb38c34', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:41:04.337453 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'be76f725e3a88c285d62ccba9d97bbd171f271b0cfa40d14595b241aa772bef8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 42 minutes'})
2025-11-01 13:41:04.337488 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aa27662533b49500ecb674439b56a3136c2f4f4aeb6ffa5dc3904e5207fc0eaa', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 43 minutes'})
2025-11-01 13:41:04.337501 | orchestrator |
2025-11-01 13:41:04.337513 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-11-01 13:41:04.337526 | orchestrator | Saturday 01 November 2025 13:40:54 +0000 (0:00:00.583) 0:00:05.918 *****
2025-11-01 13:41:04.337537 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.337548 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.337559 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.337569 | orchestrator |
2025-11-01 13:41:04.337580 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-11-01 13:41:04.337591 | orchestrator | Saturday 01 November 2025 13:40:54 +0000 (0:00:00.340) 0:00:06.259 *****
2025-11-01 13:41:04.337602 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.337614 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:41:04.337624 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:41:04.337635 | orchestrator |
2025-11-01 13:41:04.337646 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-11-01 13:41:04.337656 | orchestrator | Saturday 01 November 2025 13:40:55 +0000 (0:00:00.545) 0:00:06.805 *****
2025-11-01 13:41:04.337667 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.337678 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.337688 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.337699 | orchestrator |
2025-11-01 13:41:04.337709 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:41:04.337720 | orchestrator | Saturday 01 November 2025 13:40:55 +0000 (0:00:00.362) 0:00:07.167 *****
2025-11-01 13:41:04.337730 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.337741 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.337751 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.337762 | orchestrator |
2025-11-01 13:41:04.337772 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-11-01 13:41:04.337783 | orchestrator | Saturday 01 November 2025 13:40:56 +0000 (0:00:00.340) 0:00:07.507 *****
2025-11-01 13:41:04.337794 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-11-01 13:41:04.337806 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-11-01 13:41:04.337817 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.337828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-11-01 13:41:04.337839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-11-01 13:41:04.337849 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:41:04.337860 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-11-01 13:41:04.337871 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-11-01 13:41:04.337882 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:41:04.337893 | orchestrator |
2025-11-01 13:41:04.337904 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-11-01 13:41:04.337914 | orchestrator | Saturday 01 November 2025 13:40:56 +0000 (0:00:00.357) 0:00:07.865 *****
2025-11-01 13:41:04.337925 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.337935 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.337946 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.337956 | orchestrator |
2025-11-01 13:41:04.337985 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-11-01 13:41:04.337997 | orchestrator | Saturday 01 November 2025 13:40:57 +0000 (0:00:00.554) 0:00:08.420 *****
2025-11-01 13:41:04.338008 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338073 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:41:04.338096 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:41:04.338107 | orchestrator |
2025-11-01 13:41:04.338118 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-11-01 13:41:04.338129 | orchestrator | Saturday 01 November 2025 13:40:57 +0000 (0:00:00.345) 0:00:08.766 *****
2025-11-01 13:41:04.338139 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338150 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:41:04.338161 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:41:04.338171 | orchestrator |
2025-11-01 13:41:04.338182 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-11-01 13:41:04.338193 | orchestrator | Saturday 01 November 2025 13:40:57 +0000 (0:00:00.356) 0:00:09.122 *****
2025-11-01 13:41:04.338204 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338214 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.338225 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.338235 | orchestrator |
2025-11-01 13:41:04.338246 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:41:04.338257 | orchestrator | Saturday 01 November 2025 13:40:58 +0000 (0:00:00.316) 0:00:09.439 *****
2025-11-01 13:41:04.338268 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338278 | orchestrator |
2025-11-01 13:41:04.338290 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:41:04.338301 | orchestrator | Saturday 01 November 2025 13:40:58 +0000 (0:00:00.784) 0:00:10.223 *****
2025-11-01 13:41:04.338311 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338322 | orchestrator |
2025-11-01 13:41:04.338351 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:41:04.338363 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.282) 0:00:10.506 *****
2025-11-01 13:41:04.338374 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338390 | orchestrator |
2025-11-01 13:41:04.338401 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:04.338412 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.266) 0:00:10.773 *****
2025-11-01 13:41:04.338422 | orchestrator |
2025-11-01 13:41:04.338433 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:04.338444 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.075) 0:00:10.848 *****
2025-11-01 13:41:04.338454 | orchestrator |
2025-11-01 13:41:04.338465 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:04.338476 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.083) 0:00:10.931 *****
2025-11-01 13:41:04.338486 | orchestrator |
2025-11-01 13:41:04.338497 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:41:04.338508 | orchestrator | Saturday 01 November 2025 13:40:59 +0000 (0:00:00.086) 0:00:11.017 *****
2025-11-01 13:41:04.338519 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338529 | orchestrator |
2025-11-01 13:41:04.338540 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-11-01 13:41:04.338551 | orchestrator | Saturday 01 November 2025 13:41:00 +0000 (0:00:00.273) 0:00:11.290 *****
2025-11-01 13:41:04.338562 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338572 | orchestrator |
2025-11-01 13:41:04.338583 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:41:04.338594 | orchestrator | Saturday 01 November 2025 13:41:00 +0000 (0:00:00.261) 0:00:11.552 *****
2025-11-01 13:41:04.338604 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338615 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.338626 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.338636 | orchestrator |
2025-11-01 13:41:04.338647 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-11-01 13:41:04.338658 | orchestrator | Saturday 01 November 2025 13:41:00 +0000 (0:00:00.349) 0:00:11.901 *****
2025-11-01 13:41:04.338668 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338679 | orchestrator |
2025-11-01 13:41:04.338696 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-11-01 13:41:04.338707 | orchestrator | Saturday 01 November 2025 13:41:01 +0000 (0:00:00.775) 0:00:12.676 *****
2025-11-01 13:41:04.338718 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-11-01 13:41:04.338729 | orchestrator |
2025-11-01 13:41:04.338740 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-11-01 13:41:04.338750 | orchestrator | Saturday 01 November 2025 13:41:03 +0000 (0:00:01.804) 0:00:14.481 *****
2025-11-01 13:41:04.338761 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338772 | orchestrator |
2025-11-01 13:41:04.338782 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-11-01 13:41:04.338793 | orchestrator | Saturday 01 November 2025 13:41:03 +0000 (0:00:00.178) 0:00:14.660 *****
2025-11-01 13:41:04.338804 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338814 | orchestrator |
2025-11-01 13:41:04.338825 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-11-01 13:41:04.338836 | orchestrator | Saturday 01 November 2025 13:41:03 +0000 (0:00:00.376) 0:00:15.037 *****
2025-11-01 13:41:04.338847 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:04.338858 | orchestrator |
2025-11-01 13:41:04.338868 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-11-01 13:41:04.338879 | orchestrator | Saturday 01 November 2025 13:41:03 +0000 (0:00:00.122) 0:00:15.160 *****
2025-11-01 13:41:04.338890 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338901 | orchestrator |
2025-11-01 13:41:04.338911 | orchestrator | TASK [Prepare test data] *******************************************************
2025-11-01 13:41:04.338922 | orchestrator | Saturday 01 November 2025 13:41:04 +0000 (0:00:00.140) 0:00:15.301 *****
2025-11-01 13:41:04.338933 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:04.338943 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:04.338954 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:04.338964 | orchestrator |
2025-11-01 13:41:04.338975 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-11-01 13:41:04.338995 | orchestrator | Saturday 01 November 2025 13:41:04 +0000 (0:00:00.317) 0:00:15.618 *****
2025-11-01 13:41:18.508401 | orchestrator | changed: [testbed-node-3]
2025-11-01 13:41:18.508507 | orchestrator | changed: [testbed-node-4]
2025-11-01 13:41:18.508522 | orchestrator | changed: [testbed-node-5]
2025-11-01 13:41:18.508534 | orchestrator |
2025-11-01 13:41:18.508546 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-11-01 13:41:18.508558 | orchestrator | Saturday 01 November 2025 13:41:07 +0000 (0:00:02.754)
0:00:18.372 ***** 2025-11-01 13:41:18.508569 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:18.508580 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:18.508591 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:18.508601 | orchestrator | 2025-11-01 13:41:18.508612 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-11-01 13:41:18.508623 | orchestrator | Saturday 01 November 2025 13:41:07 +0000 (0:00:00.340) 0:00:18.713 ***** 2025-11-01 13:41:18.508634 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:18.508644 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:18.508654 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:18.508665 | orchestrator | 2025-11-01 13:41:18.508675 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-11-01 13:41:18.508686 | orchestrator | Saturday 01 November 2025 13:41:08 +0000 (0:00:00.627) 0:00:19.340 ***** 2025-11-01 13:41:18.508697 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:18.508707 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:18.508718 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:18.508729 | orchestrator | 2025-11-01 13:41:18.508740 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-11-01 13:41:18.508751 | orchestrator | Saturday 01 November 2025 13:41:08 +0000 (0:00:00.335) 0:00:19.676 ***** 2025-11-01 13:41:18.508761 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:18.508772 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:18.508807 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:18.508818 | orchestrator | 2025-11-01 13:41:18.508828 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-11-01 13:41:18.508854 | orchestrator | Saturday 01 November 2025 13:41:08 +0000 (0:00:00.590) 0:00:20.267 ***** 2025-11-01 13:41:18.508866 | 
orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:18.508876 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:18.508887 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:18.508898 | orchestrator | 2025-11-01 13:41:18.508908 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-11-01 13:41:18.508921 | orchestrator | Saturday 01 November 2025 13:41:09 +0000 (0:00:00.318) 0:00:20.586 ***** 2025-11-01 13:41:18.508933 | orchestrator | skipping: [testbed-node-3] 2025-11-01 13:41:18.508945 | orchestrator | skipping: [testbed-node-4] 2025-11-01 13:41:18.508958 | orchestrator | skipping: [testbed-node-5] 2025-11-01 13:41:18.508970 | orchestrator | 2025-11-01 13:41:18.508982 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-11-01 13:41:18.508994 | orchestrator | Saturday 01 November 2025 13:41:09 +0000 (0:00:00.331) 0:00:20.918 ***** 2025-11-01 13:41:18.509006 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:18.509018 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:18.509030 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:18.509042 | orchestrator | 2025-11-01 13:41:18.509054 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-11-01 13:41:18.509066 | orchestrator | Saturday 01 November 2025 13:41:10 +0000 (0:00:00.547) 0:00:21.465 ***** 2025-11-01 13:41:18.509078 | orchestrator | ok: [testbed-node-3] 2025-11-01 13:41:18.509090 | orchestrator | ok: [testbed-node-4] 2025-11-01 13:41:18.509101 | orchestrator | ok: [testbed-node-5] 2025-11-01 13:41:18.509113 | orchestrator | 2025-11-01 13:41:18.509126 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-11-01 13:41:18.509138 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.886) 0:00:22.351 ***** 2025-11-01 13:41:18.509150 | orchestrator | ok: [testbed-node-3] 
2025-11-01 13:41:18.509162 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:18.509174 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:18.509186 | orchestrator |
2025-11-01 13:41:18.509198 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-11-01 13:41:18.509210 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.364) 0:00:22.716 *****
2025-11-01 13:41:18.509222 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:18.509234 | orchestrator | skipping: [testbed-node-4]
2025-11-01 13:41:18.509247 | orchestrator | skipping: [testbed-node-5]
2025-11-01 13:41:18.509258 | orchestrator |
2025-11-01 13:41:18.509271 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-11-01 13:41:18.509283 | orchestrator | Saturday 01 November 2025 13:41:11 +0000 (0:00:00.382) 0:00:23.099 *****
2025-11-01 13:41:18.509294 | orchestrator | ok: [testbed-node-3]
2025-11-01 13:41:18.509305 | orchestrator | ok: [testbed-node-4]
2025-11-01 13:41:18.509316 | orchestrator | ok: [testbed-node-5]
2025-11-01 13:41:18.509326 | orchestrator |
2025-11-01 13:41:18.509358 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-11-01 13:41:18.509370 | orchestrator | Saturday 01 November 2025 13:41:12 +0000 (0:00:00.602) 0:00:23.701 *****
2025-11-01 13:41:18.509380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:41:18.509391 | orchestrator |
2025-11-01 13:41:18.509402 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-11-01 13:41:18.509413 | orchestrator | Saturday 01 November 2025 13:41:12 +0000 (0:00:00.277) 0:00:23.979 *****
2025-11-01 13:41:18.509423 | orchestrator | skipping: [testbed-node-3]
2025-11-01 13:41:18.509434 | orchestrator |
2025-11-01 13:41:18.509445 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-11-01 13:41:18.509456 | orchestrator | Saturday 01 November 2025 13:41:12 +0000 (0:00:00.261) 0:00:24.240 *****
2025-11-01 13:41:18.509476 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:41:18.509487 | orchestrator |
2025-11-01 13:41:18.509498 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-11-01 13:41:18.509508 | orchestrator | Saturday 01 November 2025 13:41:14 +0000 (0:00:01.908) 0:00:26.148 *****
2025-11-01 13:41:18.509519 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:41:18.509530 | orchestrator |
2025-11-01 13:41:18.509540 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-11-01 13:41:18.509551 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:00.326) 0:00:26.474 *****
2025-11-01 13:41:18.509578 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:41:18.509590 | orchestrator |
2025-11-01 13:41:18.509601 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:18.509612 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:00.298) 0:00:26.773 *****
2025-11-01 13:41:18.509622 | orchestrator |
2025-11-01 13:41:18.509633 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:18.509643 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:00.075) 0:00:26.849 *****
2025-11-01 13:41:18.509654 | orchestrator |
2025-11-01 13:41:18.509665 | orchestrator | TASK [Flush handlers] **********************************************************
2025-11-01 13:41:18.509675 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:00.071) 0:00:26.920 *****
2025-11-01 13:41:18.509686 | orchestrator |
2025-11-01 13:41:18.509696 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-11-01 13:41:18.509707 | orchestrator | Saturday 01 November 2025 13:41:15 +0000 (0:00:00.080) 0:00:27.000 *****
2025-11-01 13:41:18.509718 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-11-01 13:41:18.509728 | orchestrator |
2025-11-01 13:41:18.509739 | orchestrator | TASK [Print report file information] *******************************************
2025-11-01 13:41:18.509749 | orchestrator | Saturday 01 November 2025 13:41:17 +0000 (0:00:01.691) 0:00:28.692 *****
2025-11-01 13:41:18.509760 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-11-01 13:41:18.509771 | orchestrator |     "msg": [
2025-11-01 13:41:18.509782 | orchestrator |         "Validator run completed.",
2025-11-01 13:41:18.509792 | orchestrator |         "You can find the report file here:",
2025-11-01 13:41:18.509803 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2025-11-01T13:40:50+00:00-report.json",
2025-11-01 13:41:18.509814 | orchestrator |         "on the following host:",
2025-11-01 13:41:18.509826 | orchestrator |         "testbed-manager"
2025-11-01 13:41:18.509836 | orchestrator |     ]
2025-11-01 13:41:18.509847 | orchestrator | }
2025-11-01 13:41:18.509858 | orchestrator |
2025-11-01 13:41:18.509869 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:41:18.509881 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-11-01 13:41:18.509893 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-11-01 13:41:18.509904 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-11-01 13:41:18.509915 | orchestrator |
2025-11-01 13:41:18.509926 | orchestrator |
2025-11-01 13:41:18.509936 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:41:18.509947 | orchestrator | Saturday 01 November 2025 13:41:18 +0000 (0:00:00.686) 0:00:29.379 *****
2025-11-01 13:41:18.509958 | orchestrator | ===============================================================================
2025-11-01 13:41:18.509968 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.75s
2025-11-01 13:41:18.509979 | orchestrator | Aggregate test results step one ----------------------------------------- 1.91s
2025-11-01 13:41:18.509996 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.80s
2025-11-01 13:41:18.510007 | orchestrator | Write report file ------------------------------------------------------- 1.69s
2025-11-01 13:41:18.510103 | orchestrator | Get timestamp for report file ------------------------------------------- 0.93s
2025-11-01 13:41:18.510119 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.89s
2025-11-01 13:41:18.510130 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.86s
2025-11-01 13:41:18.510141 | orchestrator | Create report output directory ------------------------------------------ 0.85s
2025-11-01 13:41:18.510152 | orchestrator | Aggregate test results step one ----------------------------------------- 0.78s
2025-11-01 13:41:18.510162 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.78s
2025-11-01 13:41:18.510173 | orchestrator | Print report file information ------------------------------------------- 0.69s
2025-11-01 13:41:18.510184 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.64s
2025-11-01 13:41:18.510194 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.63s
2025-11-01 13:41:18.510205 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.60s
2025-11-01 13:41:18.510216 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.59s
2025-11-01 13:41:18.510226 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s
2025-11-01 13:41:18.510237 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.55s
2025-11-01 13:41:18.510248 | orchestrator | Prepare test data ------------------------------------------------------- 0.55s
2025-11-01 13:41:18.510259 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.55s
2025-11-01 13:41:18.510269 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.39s
2025-11-01 13:41:18.930956 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-11-01 13:41:18.937982 | orchestrator | + set -e
2025-11-01 13:41:18.938007 | orchestrator | + source /opt/manager-vars.sh
2025-11-01 13:41:18.938057 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-11-01 13:41:18.938068 | orchestrator | ++ NUMBER_OF_NODES=6
2025-11-01 13:41:18.938078 | orchestrator | ++ export CEPH_VERSION=reef
2025-11-01 13:41:18.938087 | orchestrator | ++ CEPH_VERSION=reef
2025-11-01 13:41:18.938097 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-11-01 13:41:18.938108 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-11-01 13:41:18.938117 | orchestrator | ++ export MANAGER_VERSION=latest
2025-11-01 13:41:18.938127 | orchestrator | ++ MANAGER_VERSION=latest
2025-11-01 13:41:18.938136 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-11-01 13:41:18.938146 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-11-01 13:41:18.938156 | orchestrator | ++ export ARA=false
2025-11-01 13:41:18.938166 | orchestrator | ++ ARA=false
2025-11-01 13:41:18.938176 | orchestrator | ++ export DEPLOY_MODE=manager
2025-11-01 13:41:18.938185 | orchestrator | ++ DEPLOY_MODE=manager
2025-11-01 13:41:18.938195 | orchestrator | ++ export TEMPEST=false
2025-11-01 13:41:18.938204 | orchestrator | ++ TEMPEST=false
2025-11-01 13:41:18.938213 | orchestrator | ++ export IS_ZUUL=true
2025-11-01 13:41:18.938223 | orchestrator | ++ IS_ZUUL=true
2025-11-01 13:41:18.938232 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-11-01 13:41:18.938242 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2025-11-01 13:41:18.938251 | orchestrator | ++ export EXTERNAL_API=false
2025-11-01 13:41:18.938261 | orchestrator | ++ EXTERNAL_API=false
2025-11-01 13:41:18.938270 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-11-01 13:41:18.938279 | orchestrator | ++ IMAGE_USER=ubuntu
2025-11-01 13:41:18.938289 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-11-01 13:41:18.938298 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-11-01 13:41:18.938308 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-11-01 13:41:18.938317 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-11-01 13:41:18.938327 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-11-01 13:41:18.938373 | orchestrator | + source /etc/os-release
2025-11-01 13:41:18.938383 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-11-01 13:41:18.938393 | orchestrator | ++ NAME=Ubuntu
2025-11-01 13:41:18.938403 | orchestrator | ++ VERSION_ID=24.04
2025-11-01 13:41:18.938412 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-11-01 13:41:18.938442 | orchestrator | ++ VERSION_CODENAME=noble
2025-11-01 13:41:18.938452 | orchestrator | ++ ID=ubuntu
2025-11-01 13:41:18.938461 | orchestrator | ++ ID_LIKE=debian
2025-11-01 13:41:18.938471 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-11-01 13:41:18.938481 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-11-01 13:41:18.938490 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-11-01 13:41:18.938500 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-11-01 13:41:18.938510 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-11-01 13:41:18.938520 | orchestrator | ++ LOGO=ubuntu-logo
2025-11-01 13:41:18.938530 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-11-01 13:41:18.938540 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-11-01 13:41:18.938552 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-11-01 13:41:18.961901 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-11-01 13:41:43.533501 | orchestrator |
2025-11-01 13:41:43.533609 | orchestrator | # Status of Elasticsearch
2025-11-01 13:41:43.533626 | orchestrator |
2025-11-01 13:41:43.533638 | orchestrator | + pushd /opt/configuration/contrib
2025-11-01 13:41:43.533650 | orchestrator | + echo
2025-11-01 13:41:43.533661 | orchestrator | + echo '# Status of Elasticsearch'
2025-11-01 13:41:43.533672 | orchestrator | + echo
2025-11-01 13:41:43.533683 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-11-01 13:41:43.695393 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-11-01 13:41:43.695446 | orchestrator |
2025-11-01 13:41:43.695459 | orchestrator | # Status of MariaDB
2025-11-01 13:41:43.695470 | orchestrator |
2025-11-01 13:41:43.695482 | orchestrator | + echo
2025-11-01 13:41:43.695493 | orchestrator | + echo '# Status of MariaDB'
2025-11-01 13:41:43.695504 | orchestrator | + echo
2025-11-01 13:41:43.695515 | orchestrator | + MARIADB_USER=root_shard_0
2025-11-01 13:41:43.695527 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-11-01 13:41:43.757233 | orchestrator | Reading package lists...
2025-11-01 13:41:44.199756 | orchestrator | Building dependency tree...
2025-11-01 13:41:44.200412 | orchestrator | Reading state information...
2025-11-01 13:41:44.774914 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-11-01 13:41:44.775011 | orchestrator | bc set to manually installed.
2025-11-01 13:41:44.775036 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-11-01 13:41:45.461908 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-11-01 13:41:45.462258 | orchestrator |
2025-11-01 13:41:45.462395 | orchestrator | # Status of Prometheus
2025-11-01 13:41:45.462413 | orchestrator |
2025-11-01 13:41:45.462425 | orchestrator | + echo
2025-11-01 13:41:45.462437 | orchestrator | + echo '# Status of Prometheus'
2025-11-01 13:41:45.462448 | orchestrator | + echo
2025-11-01 13:41:45.462460 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-11-01 13:41:45.535262 | orchestrator | Unauthorized
2025-11-01 13:41:45.541571 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-11-01 13:41:45.607863 | orchestrator | Unauthorized
2025-11-01 13:41:45.613139 | orchestrator |
2025-11-01 13:41:45.613166 | orchestrator | # Status of RabbitMQ
2025-11-01 13:41:45.613178 | orchestrator |
2025-11-01 13:41:45.613189 | orchestrator | + echo
2025-11-01 13:41:45.613200 | orchestrator | + echo '# Status of RabbitMQ'
2025-11-01 13:41:45.613211 | orchestrator | + echo
2025-11-01 13:41:45.613223 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-11-01 13:41:46.087663 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-11-01 13:41:46.097031 | orchestrator |
2025-11-01 13:41:46.097083 | orchestrator | # Status of Redis
2025-11-01 13:41:46.097095 | orchestrator |
2025-11-01 13:41:46.097106 | orchestrator | + echo
2025-11-01 13:41:46.097116 | orchestrator | + echo '# Status of Redis'
2025-11-01 13:41:46.097127 | orchestrator | + echo
2025-11-01 13:41:46.097139 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-11-01 13:41:46.105272 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001632s;;;0.000000;10.000000
2025-11-01 13:41:46.105751 | orchestrator | + popd
2025-11-01 13:41:46.105779 | orchestrator |
2025-11-01 13:41:46.105791 | orchestrator | # Create backup of MariaDB database
2025-11-01 13:41:46.105804 | orchestrator |
2025-11-01 13:41:46.105815 | orchestrator | + echo
2025-11-01 13:41:46.105826 | orchestrator | + echo '# Create backup of MariaDB database'
2025-11-01 13:41:46.105837 | orchestrator | + echo
2025-11-01 13:41:46.105848 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-11-01 13:41:48.412881 | orchestrator | 2025-11-01 13:41:48 | INFO  | Task 51fc9148-ebbb-4d05-a59a-60b14ff9e98b (mariadb_backup) was prepared for execution.
2025-11-01 13:41:48.412983 | orchestrator | 2025-11-01 13:41:48 | INFO  | It takes a moment until task 51fc9148-ebbb-4d05-a59a-60b14ff9e98b (mariadb_backup) has been started and output is visible here.
2025-11-01 13:43:19.049418 | orchestrator |
2025-11-01 13:43:19.049531 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-11-01 13:43:19.049544 | orchestrator |
2025-11-01 13:43:19.049553 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-11-01 13:43:19.049563 | orchestrator | Saturday 01 November 2025 13:41:53 +0000 (0:00:00.201) 0:00:00.201 *****
2025-11-01 13:43:19.049571 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:43:19.049580 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:43:19.049599 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:43:19.050274 | orchestrator |
2025-11-01 13:43:19.050290 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-11-01 13:43:19.050300 | orchestrator | Saturday 01 November 2025 13:41:53 +0000 (0:00:00.395) 0:00:00.597 *****
2025-11-01 13:43:19.050308 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-11-01 13:43:19.050317 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-11-01 13:43:19.050325 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-11-01 13:43:19.050333 | orchestrator |
2025-11-01 13:43:19.050355 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-11-01 13:43:19.050364 | orchestrator |
2025-11-01 13:43:19.050371 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-11-01 13:43:19.050379 | orchestrator | Saturday 01 November 2025 13:41:54 +0000 (0:00:00.671) 0:00:01.268 *****
2025-11-01 13:43:19.050388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-11-01 13:43:19.050396 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-11-01 13:43:19.050404 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-11-01 13:43:19.050412 | orchestrator |
2025-11-01 13:43:19.050420 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-11-01 13:43:19.050428 | orchestrator | Saturday 01 November 2025 13:41:54 +0000 (0:00:00.445) 0:00:01.714 *****
2025-11-01 13:43:19.050436 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-11-01 13:43:19.050445 | orchestrator |
2025-11-01 13:43:19.050453 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-11-01 13:43:19.050461 | orchestrator | Saturday 01 November 2025 13:41:55 +0000 (0:00:00.618) 0:00:02.332 *****
2025-11-01 13:43:19.050469 | orchestrator | ok: [testbed-node-1]
2025-11-01 13:43:19.050477 | orchestrator | ok: [testbed-node-0]
2025-11-01 13:43:19.050485 | orchestrator | ok: [testbed-node-2]
2025-11-01 13:43:19.050492 | orchestrator |
2025-11-01 13:43:19.050514 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-11-01 13:43:19.050522 | orchestrator | Saturday 01 November 2025 13:41:59 +0000 (0:00:03.674) 0:00:06.006 *****
2025-11-01 13:43:19.050530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-11-01 13:43:19.050538 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-11-01 13:43:19.050575 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-11-01 13:43:19.050584 | orchestrator | mariadb_bootstrap_restart
2025-11-01 13:43:19.050592 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:43:19.050599 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:43:19.050607 | orchestrator | changed: [testbed-node-0]
2025-11-01 13:43:19.050615 | orchestrator |
2025-11-01 13:43:19.050623 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-11-01 13:43:19.050630 | orchestrator | skipping: no hosts matched
2025-11-01 13:43:19.050638 | orchestrator |
2025-11-01 13:43:19.050646 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-11-01 13:43:19.050653 | orchestrator | skipping: no hosts matched
2025-11-01 13:43:19.050661 | orchestrator |
2025-11-01 13:43:19.050669 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-11-01 13:43:19.050676 | orchestrator | skipping: no hosts matched
2025-11-01 13:43:19.050684 | orchestrator |
2025-11-01 13:43:19.050692 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-11-01 13:43:19.050700 | orchestrator |
2025-11-01 13:43:19.050707 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-11-01 13:43:19.050715 | orchestrator | Saturday 01 November 2025 13:43:17 +0000 (0:01:18.572) 0:01:24.579 *****
2025-11-01 13:43:19.050723 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:43:19.050731 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:43:19.050738 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:43:19.050746 | orchestrator |
2025-11-01 13:43:19.050754 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-11-01 13:43:19.050762 | orchestrator | Saturday 01 November 2025 13:43:18 +0000 (0:00:00.349) 0:01:24.928 *****
2025-11-01 13:43:19.050769 | orchestrator | skipping: [testbed-node-0]
2025-11-01 13:43:19.050777 | orchestrator | skipping: [testbed-node-1]
2025-11-01 13:43:19.050785 | orchestrator | skipping: [testbed-node-2]
2025-11-01 13:43:19.050792 | orchestrator |
2025-11-01 13:43:19.050800 | orchestrator | PLAY RECAP *********************************************************************
2025-11-01 13:43:19.050809 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-11-01 13:43:19.050819 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 13:43:19.050827 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-11-01 13:43:19.050835 | orchestrator |
2025-11-01 13:43:19.050843 | orchestrator |
2025-11-01 13:43:19.050851 | orchestrator | TASKS RECAP ********************************************************************
2025-11-01 13:43:19.050858 | orchestrator | Saturday 01 November 2025 13:43:18 +0000 (0:00:00.502) 0:01:25.430 *****
2025-11-01 13:43:19.050866 | orchestrator | ===============================================================================
2025-11-01 13:43:19.050874 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 78.57s
2025-11-01 13:43:19.050898 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.67s
2025-11-01 13:43:19.050907 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-11-01 13:43:19.050915 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.62s
2025-11-01 13:43:19.050922 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.50s
2025-11-01 13:43:19.050930 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s
2025-11-01 13:43:19.050937 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2025-11-01 13:43:19.050945 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.35s
2025-11-01 13:43:19.444293 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-11-01 13:43:19.452446 | orchestrator | + set -e
2025-11-01 13:43:19.452479 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-11-01 13:43:19.452492 | orchestrator | ++ export INTERACTIVE=false
2025-11-01 13:43:19.452503 | orchestrator | ++ INTERACTIVE=false
2025-11-01 13:43:19.452520 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-11-01 13:43:19.452531 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-11-01 13:43:19.452542 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-11-01 13:43:19.453508 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-11-01 13:43:19.457418 | orchestrator |
2025-11-01 13:43:19.457445 | orchestrator | # OpenStack endpoints
2025-11-01 13:43:19.457457 | orchestrator |
2025-11-01 13:43:19.457468 | orchestrator | ++ export MANAGER_VERSION=latest
2025-11-01 13:43:19.457479 | orchestrator | ++ MANAGER_VERSION=latest
2025-11-01 13:43:19.457489 | orchestrator | + export OS_CLOUD=admin
2025-11-01 13:43:19.457500 | orchestrator | + OS_CLOUD=admin
2025-11-01 13:43:19.457510 | orchestrator | + echo
2025-11-01 13:43:19.457521 | orchestrator | + echo '# OpenStack
endpoints' 2025-11-01 13:43:19.457531 | orchestrator | + echo 2025-11-01 13:43:19.457542 | orchestrator | + openstack endpoint list 2025-11-01 13:43:23.510688 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 13:43:23.510781 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-11-01 13:43:23.510795 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 13:43:23.510823 | orchestrator | | 056a37bfc79c4aab97948ce66deb8750 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-11-01 13:43:23.510834 | orchestrator | | 1031f675fea842ed9f46ebcf565f4ab6 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-11-01 13:43:23.510845 | orchestrator | | 1ea88521c9814240b80e8b2ddc906f5f | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-11-01 13:43:23.510855 | orchestrator | | 2e6809b91f4a451588d76bef6e4a440e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-11-01 13:43:23.510866 | orchestrator | | 36f6f264354646708d29c8ef4baa87e1 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-11-01 13:43:23.510876 | orchestrator | | 374ca16cb992420db22209f9be182cb3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-01 13:43:23.510887 | orchestrator | | 5076842503304a54ab5034fd5a7d52e4 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-11-01 13:43:23.510897 | orchestrator | | 
529a6d861cc846c2bde10233f973a4cf | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-11-01 13:43:23.510908 | orchestrator | | 560f1a07936745258df81f5cd0e1e265 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-01 13:43:23.510918 | orchestrator | | 5f6a3f7781b44c6a9f2820e4eb8b0ce8 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-11-01 13:43:23.510929 | orchestrator | | 655787e28f154fb38bb688bcb2926bac | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-11-01 13:43:23.510939 | orchestrator | | 69ae1dff3b064103b3429523695486f9 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-11-01 13:43:23.510969 | orchestrator | | 83cf33528fdc464b86df28507366f2b4 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-11-01 13:43:23.510980 | orchestrator | | 9106bbf0962342c7a1dd84212330ec25 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-11-01 13:43:23.510990 | orchestrator | | 9e0de9f3347d4ad781328d1fe3252a07 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-11-01 13:43:23.511001 | orchestrator | | a8b4441117484fbc8a87c872b5191019 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-11-01 13:43:23.511012 | orchestrator | | b2bc7c6a18944650b8aa83b0a0e611d6 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-11-01 13:43:23.511022 | orchestrator | | b3f7e153c3c54480a815e2264b26151c | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-11-01 13:43:23.511033 | orchestrator | | bb59218878e44c36bcaa3deaa18b8af3 | 
RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-11-01 13:43:23.511043 | orchestrator | | bf82c9024ef24007ae152154c1bdde84 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-11-01 13:43:23.511070 | orchestrator | | e43adbdcad12441eb45c829dc776db3a | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-11-01 13:43:23.511082 | orchestrator | | f2d742e308564090be566f624fa35553 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-11-01 13:43:23.511092 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-11-01 13:43:23.830613 | orchestrator | 2025-11-01 13:43:23.830677 | orchestrator | # Cinder 2025-11-01 13:43:23.830689 | orchestrator | 2025-11-01 13:43:23.830701 | orchestrator | + echo 2025-11-01 13:43:23.830711 | orchestrator | + echo '# Cinder' 2025-11-01 13:43:23.830722 | orchestrator | + echo 2025-11-01 13:43:23.830733 | orchestrator | + openstack volume service list 2025-11-01 13:43:26.784145 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:26.784247 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-11-01 13:43:26.784263 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:26.784274 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-01T13:43:23.000000 | 2025-11-01 13:43:26.784285 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-01T13:43:25.000000 | 2025-11-01 13:43:26.784296 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-01T13:43:25.000000 
| 2025-11-01 13:43:26.784306 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-11-01T13:43:17.000000 | 2025-11-01 13:43:26.784317 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-11-01T13:43:18.000000 | 2025-11-01 13:43:26.784327 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-11-01T13:43:19.000000 | 2025-11-01 13:43:26.784377 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-11-01T13:43:17.000000 | 2025-11-01 13:43:26.784409 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-11-01T13:43:17.000000 | 2025-11-01 13:43:26.784421 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-11-01T13:43:18.000000 | 2025-11-01 13:43:26.784454 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:27.153991 | orchestrator | 2025-11-01 13:43:27.154116 | orchestrator | # Neutron 2025-11-01 13:43:27.154130 | orchestrator | 2025-11-01 13:43:27.154141 | orchestrator | + echo 2025-11-01 13:43:27.154153 | orchestrator | + echo '# Neutron' 2025-11-01 13:43:27.154164 | orchestrator | + echo 2025-11-01 13:43:27.154175 | orchestrator | + openstack network agent list 2025-11-01 13:43:30.038491 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 13:43:30.038589 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-11-01 13:43:30.038602 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 13:43:30.038613 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | 
ovn-controller | 2025-11-01 13:43:30.038623 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-11-01 13:43:30.038633 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-11-01 13:43:30.038642 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-11-01 13:43:30.038652 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-11-01 13:43:30.038661 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-11-01 13:43:30.038671 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 13:43:30.038680 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 13:43:30.038690 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-11-01 13:43:30.038699 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-11-01 13:43:30.402220 | orchestrator | + openstack network service provider list 2025-11-01 13:43:33.333136 | orchestrator | +---------------+------+---------+ 2025-11-01 13:43:33.333227 | orchestrator | | Service Type | Name | Default | 2025-11-01 13:43:33.333241 | orchestrator | +---------------+------+---------+ 2025-11-01 13:43:33.333253 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-11-01 13:43:33.333264 | orchestrator | +---------------+------+---------+ 2025-11-01 13:43:33.684357 | orchestrator | 2025-11-01 13:43:33.684425 | orchestrator | # Nova 2025-11-01 13:43:33.684438 
| orchestrator | 2025-11-01 13:43:33.684449 | orchestrator | + echo 2025-11-01 13:43:33.684460 | orchestrator | + echo '# Nova' 2025-11-01 13:43:33.684471 | orchestrator | + echo 2025-11-01 13:43:33.684483 | orchestrator | + openstack compute service list 2025-11-01 13:43:36.590295 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:36.590430 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-11-01 13:43:36.590462 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:36.590474 | orchestrator | | 30701eab-8264-42f7-8895-8b79c1c8a834 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-11-01T13:43:32.000000 | 2025-11-01 13:43:36.590507 | orchestrator | | e7c1a03b-faa1-49f1-9e33-3bfa1c6e0940 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-11-01T13:43:33.000000 | 2025-11-01 13:43:36.590520 | orchestrator | | 268c2193-f73b-45bd-9a7f-1ecc4632b002 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-11-01T13:43:26.000000 | 2025-11-01 13:43:36.590530 | orchestrator | | 448c117e-ce4b-4bec-b2dc-61f4b516c42a | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-11-01T13:43:26.000000 | 2025-11-01 13:43:36.590541 | orchestrator | | 351ee670-5c26-4fc1-b3fa-77f27f27f990 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-11-01T13:43:27.000000 | 2025-11-01 13:43:36.590552 | orchestrator | | ed3f50b8-42d7-4d5b-b002-9aabfe9bf506 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-11-01T13:43:32.000000 | 2025-11-01 13:43:36.590562 | orchestrator | | 7fc023b2-a912-4979-8056-2c577883c70d | nova-compute | testbed-node-4 | nova | enabled | up | 2025-11-01T13:43:32.000000 | 2025-11-01 13:43:36.590573 | orchestrator | | 
814cfb0a-b079-4ba5-aceb-14a2011c4f3f | nova-compute | testbed-node-3 | nova | enabled | up | 2025-11-01T13:43:33.000000 | 2025-11-01 13:43:36.590583 | orchestrator | | a17afe35-3a6e-482c-a7f1-b8e0155f21dc | nova-compute | testbed-node-5 | nova | enabled | up | 2025-11-01T13:43:33.000000 | 2025-11-01 13:43:36.590594 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-11-01 13:43:36.952116 | orchestrator | + openstack hypervisor list 2025-11-01 13:43:39.893281 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 13:43:39.893408 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-11-01 13:43:39.893424 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 13:43:39.893435 | orchestrator | | ddab53cc-5274-481e-a848-ea7dbfdd7805 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-11-01 13:43:39.893446 | orchestrator | | cf7eea39-ef8d-4c43-883a-1b0d17aafb6f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-11-01 13:43:39.893457 | orchestrator | | ec30569a-129f-4fa9-a8be-63f36fa28575 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-11-01 13:43:39.893468 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-11-01 13:43:40.231835 | orchestrator | 2025-11-01 13:43:40.231930 | orchestrator | # Run OpenStack test play 2025-11-01 13:43:40.231945 | orchestrator | + echo 2025-11-01 13:43:40.231957 | orchestrator | + echo '# Run OpenStack test play' 2025-11-01 13:43:40.232107 | orchestrator | 2025-11-01 13:43:40.232126 | orchestrator | + echo 2025-11-01 13:43:40.232137 | orchestrator | + osism apply --environment openstack test 2025-11-01 13:43:42.469802 | orchestrator | 2025-11-01 13:43:42 | INFO  | Trying to run 
play test in environment openstack 2025-11-01 13:43:52.645988 | orchestrator | 2025-11-01 13:43:52 | INFO  | Task 7d1cf4f4-3ac7-40a0-b699-c517c75012fc (test) was prepared for execution. 2025-11-01 13:43:52.646144 | orchestrator | 2025-11-01 13:43:52 | INFO  | It takes a moment until task 7d1cf4f4-3ac7-40a0-b699-c517c75012fc (test) has been started and output is visible here. 2025-11-01 13:51:17.096610 | orchestrator | 2025-11-01 13:51:17.096690 | orchestrator | PLAY [Create test project] ***************************************************** 2025-11-01 13:51:17.096704 | orchestrator | 2025-11-01 13:51:17.096716 | orchestrator | TASK [Create test domain] ****************************************************** 2025-11-01 13:51:17.096728 | orchestrator | Saturday 01 November 2025 13:43:57 +0000 (0:00:00.088) 0:00:00.088 ***** 2025-11-01 13:51:17.096739 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.096750 | orchestrator | 2025-11-01 13:51:17.096761 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-11-01 13:51:17.096772 | orchestrator | Saturday 01 November 2025 13:44:01 +0000 (0:00:04.162) 0:00:04.250 ***** 2025-11-01 13:51:17.096783 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.096818 | orchestrator | 2025-11-01 13:51:17.096830 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-11-01 13:51:17.096841 | orchestrator | Saturday 01 November 2025 13:44:06 +0000 (0:00:04.622) 0:00:08.873 ***** 2025-11-01 13:51:17.096851 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.096862 | orchestrator | 2025-11-01 13:51:17.096873 | orchestrator | TASK [Create test project] ***************************************************** 2025-11-01 13:51:17.096883 | orchestrator | Saturday 01 November 2025 13:44:13 +0000 (0:00:07.197) 0:00:16.070 ***** 2025-11-01 13:51:17.096894 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.096904 | orchestrator | 
2025-11-01 13:51:17.096915 | orchestrator | TASK [Create test user] ******************************************************** 2025-11-01 13:51:17.096926 | orchestrator | Saturday 01 November 2025 13:44:18 +0000 (0:00:04.691) 0:00:20.762 ***** 2025-11-01 13:51:17.096936 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.096947 | orchestrator | 2025-11-01 13:51:17.096958 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-11-01 13:51:17.096968 | orchestrator | Saturday 01 November 2025 13:44:22 +0000 (0:00:04.570) 0:00:25.332 ***** 2025-11-01 13:51:17.096979 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-11-01 13:51:17.096990 | orchestrator | changed: [localhost] => (item=member) 2025-11-01 13:51:17.097002 | orchestrator | changed: [localhost] => (item=creator) 2025-11-01 13:51:17.097013 | orchestrator | 2025-11-01 13:51:17.097036 | orchestrator | TASK [Create test server group] ************************************************ 2025-11-01 13:51:17.097048 | orchestrator | Saturday 01 November 2025 13:44:35 +0000 (0:00:13.244) 0:00:38.576 ***** 2025-11-01 13:51:17.097058 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097069 | orchestrator | 2025-11-01 13:51:17.097080 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-11-01 13:51:17.097090 | orchestrator | Saturday 01 November 2025 13:44:41 +0000 (0:00:05.210) 0:00:43.786 ***** 2025-11-01 13:51:17.097101 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097111 | orchestrator | 2025-11-01 13:51:17.097122 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-11-01 13:51:17.097132 | orchestrator | Saturday 01 November 2025 13:44:46 +0000 (0:00:05.369) 0:00:49.156 ***** 2025-11-01 13:51:17.097143 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097153 | orchestrator | 2025-11-01 13:51:17.097164 | orchestrator | TASK 
[Create icmp security group] ********************************************** 2025-11-01 13:51:17.097177 | orchestrator | Saturday 01 November 2025 13:44:51 +0000 (0:00:04.809) 0:00:53.966 ***** 2025-11-01 13:51:17.097189 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097202 | orchestrator | 2025-11-01 13:51:17.097214 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-11-01 13:51:17.097226 | orchestrator | Saturday 01 November 2025 13:44:55 +0000 (0:00:04.211) 0:00:58.177 ***** 2025-11-01 13:51:17.097238 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097251 | orchestrator | 2025-11-01 13:51:17.097263 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-11-01 13:51:17.097275 | orchestrator | Saturday 01 November 2025 13:44:59 +0000 (0:00:04.434) 0:01:02.611 ***** 2025-11-01 13:51:17.097288 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097300 | orchestrator | 2025-11-01 13:51:17.097312 | orchestrator | TASK [Create test network topology] ******************************************** 2025-11-01 13:51:17.097325 | orchestrator | Saturday 01 November 2025 13:45:04 +0000 (0:00:04.608) 0:01:07.221 ***** 2025-11-01 13:51:17.097337 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097350 | orchestrator | 2025-11-01 13:51:17.097389 | orchestrator | TASK [Create test instances] *************************************************** 2025-11-01 13:51:17.097402 | orchestrator | Saturday 01 November 2025 13:45:22 +0000 (0:00:17.840) 0:01:25.061 ***** 2025-11-01 13:51:17.097414 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 13:51:17.097427 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-01 13:51:17.097439 | orchestrator | 2025-11-01 13:51:17.097451 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 13:51:17.097471 | orchestrator | 2025-11-01 13:51:17.097484 
| orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 13:51:17.097497 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 13:51:17.097509 | orchestrator | 2025-11-01 13:51:17.097521 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 13:51:17.097532 | orchestrator | 2025-11-01 13:51:17.097542 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 13:51:17.097553 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 13:51:17.097564 | orchestrator | 2025-11-01 13:51:17.097575 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-11-01 13:51:17.097585 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 13:51:17.097596 | orchestrator | 2025-11-01 13:51:17.097607 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-11-01 13:51:17.097617 | orchestrator | Saturday 01 November 2025 13:49:43 +0000 (0:04:21.414) 0:05:46.476 ***** 2025-11-01 13:51:17.097628 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 13:51:17.097643 | orchestrator | changed: [localhost] => (item=test-1) 2025-11-01 13:51:17.097654 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 13:51:17.097665 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 13:51:17.097675 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 13:51:17.097686 | orchestrator | 2025-11-01 13:51:17.097697 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-11-01 13:51:17.097724 | orchestrator | Saturday 01 November 2025 13:50:10 +0000 (0:00:26.612) 0:06:13.089 ***** 2025-11-01 13:51:17.097736 | orchestrator | changed: [localhost] => (item=test) 2025-11-01 13:51:17.097746 | orchestrator | changed: [localhost] => (item=test-1) 
2025-11-01 13:51:17.097757 | orchestrator | changed: [localhost] => (item=test-2) 2025-11-01 13:51:17.097768 | orchestrator | changed: [localhost] => (item=test-3) 2025-11-01 13:51:17.097778 | orchestrator | changed: [localhost] => (item=test-4) 2025-11-01 13:51:17.097789 | orchestrator | 2025-11-01 13:51:17.097800 | orchestrator | TASK [Create test volume] ****************************************************** 2025-11-01 13:51:17.097810 | orchestrator | Saturday 01 November 2025 13:50:48 +0000 (0:00:38.165) 0:06:51.255 ***** 2025-11-01 13:51:17.097821 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097832 | orchestrator | 2025-11-01 13:51:17.097842 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-11-01 13:51:17.097853 | orchestrator | Saturday 01 November 2025 13:50:55 +0000 (0:00:07.009) 0:06:58.265 ***** 2025-11-01 13:51:17.097864 | orchestrator | changed: [localhost] 2025-11-01 13:51:17.097874 | orchestrator | 2025-11-01 13:51:17.097885 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-11-01 13:51:17.097896 | orchestrator | Saturday 01 November 2025 13:51:10 +0000 (0:00:15.334) 0:07:13.599 ***** 2025-11-01 13:51:17.097907 | orchestrator | ok: [localhost] 2025-11-01 13:51:17.097918 | orchestrator | 2025-11-01 13:51:17.097929 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-11-01 13:51:17.097940 | orchestrator | Saturday 01 November 2025 13:51:16 +0000 (0:00:05.737) 0:07:19.337 ***** 2025-11-01 13:51:17.097950 | orchestrator | ok: [localhost] => { 2025-11-01 13:51:17.097961 | orchestrator |  "msg": "192.168.112.191" 2025-11-01 13:51:17.097972 | orchestrator | } 2025-11-01 13:51:17.097983 | orchestrator | 2025-11-01 13:51:17.097994 | orchestrator | PLAY RECAP ********************************************************************* 2025-11-01 13:51:17.098005 | orchestrator | localhost : ok=20  
changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-11-01 13:51:17.098064 | orchestrator | 2025-11-01 13:51:17.098077 | orchestrator | 2025-11-01 13:51:17.098088 | orchestrator | TASKS RECAP ******************************************************************** 2025-11-01 13:51:17.098104 | orchestrator | Saturday 01 November 2025 13:51:16 +0000 (0:00:00.047) 0:07:19.384 ***** 2025-11-01 13:51:17.098126 | orchestrator | =============================================================================== 2025-11-01 13:51:17.098137 | orchestrator | Create test instances ------------------------------------------------- 261.41s 2025-11-01 13:51:17.098147 | orchestrator | Add tag to instances --------------------------------------------------- 38.17s 2025-11-01 13:51:17.098158 | orchestrator | Add metadata to instances ---------------------------------------------- 26.61s 2025-11-01 13:51:17.098169 | orchestrator | Create test network topology ------------------------------------------- 17.84s 2025-11-01 13:51:17.098179 | orchestrator | Attach test volume ----------------------------------------------------- 15.33s 2025-11-01 13:51:17.098190 | orchestrator | Add member roles to user test ------------------------------------------ 13.24s 2025-11-01 13:51:17.098201 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.20s 2025-11-01 13:51:17.098211 | orchestrator | Create test volume ------------------------------------------------------ 7.01s 2025-11-01 13:51:17.098222 | orchestrator | Create floating ip address ---------------------------------------------- 5.74s 2025-11-01 13:51:17.098233 | orchestrator | Create ssh security group ----------------------------------------------- 5.37s 2025-11-01 13:51:17.098243 | orchestrator | Create test server group ------------------------------------------------ 5.21s 2025-11-01 13:51:17.098254 | orchestrator | Add rule to ssh security group 
------------------------------------------ 4.81s 2025-11-01 13:51:17.098265 | orchestrator | Create test project ----------------------------------------------------- 4.69s 2025-11-01 13:51:17.098275 | orchestrator | Create test-admin user -------------------------------------------------- 4.62s 2025-11-01 13:51:17.098286 | orchestrator | Create test keypair ----------------------------------------------------- 4.61s 2025-11-01 13:51:17.098297 | orchestrator | Create test user -------------------------------------------------------- 4.57s 2025-11-01 13:51:17.098307 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.43s 2025-11-01 13:51:17.098318 | orchestrator | Create icmp security group ---------------------------------------------- 4.21s 2025-11-01 13:51:17.098329 | orchestrator | Create test domain ------------------------------------------------------ 4.16s 2025-11-01 13:51:17.098339 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-11-01 13:51:17.513079 | orchestrator | + server_list 2025-11-01 13:51:17.513117 | orchestrator | + openstack --os-cloud test server list 2025-11-01 13:51:22.359287 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-01 13:51:22.359432 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-11-01 13:51:22.359447 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-11-01 13:51:22.359460 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE | auto_allocated_network=10.42.0.13, 192.168.112.178 | N/A (booted from volume) | SCS-1L-1 | 2025-11-01 13:51:22.359471 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE | auto_allocated_network=10.42.0.24, 
192.168.112.195 | N/A (booted from volume) | SCS-1L-1 |
2025-11-01 13:51:22.359482 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE | auto_allocated_network=10.42.0.17, 192.168.112.155 | N/A (booted from volume) | SCS-1L-1 |
2025-11-01 13:51:22.359493 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE | auto_allocated_network=10.42.0.58, 192.168.112.112 | N/A (booted from volume) | SCS-1L-1 |
2025-11-01 13:51:22.359503 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test | ACTIVE | auto_allocated_network=10.42.0.29, 192.168.112.191 | N/A (booted from volume) | SCS-1L-1 |
2025-11-01 13:51:22.359514 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
2025-11-01 13:51:22.695496 | orchestrator | + openstack --os-cloud test server show test
2025-11-01 13:51:26.263294 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:26.263419 | orchestrator | | Field | Value |
2025-11-01 13:51:26.263432 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:26.263441 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-11-01 13:51:26.263449 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-11-01 13:51:26.263457 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-11-01 13:51:26.263465 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-11-01 13:51:26.263473 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-11-01 13:51:26.263481 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-11-01 13:51:26.263508 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-11-01 13:51:26.263532 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-11-01 13:51:26.263541 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-11-01 13:51:26.263551 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-11-01 13:51:26.263559 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-11-01 13:51:26.263567 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-11-01 13:51:26.263575 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-11-01 13:51:26.263583 | orchestrator | | OS-EXT-STS:task_state | None |
2025-11-01 13:51:26.263591 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-11-01 13:51:26.263599 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T13:46:07.000000 |
2025-11-01 13:51:26.263617 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-11-01 13:51:26.263626 | orchestrator | | accessIPv4 | |
2025-11-01 13:51:26.263634 | orchestrator | | accessIPv6 | |
2025-11-01 13:51:26.263645 | orchestrator | | addresses | auto_allocated_network=10.42.0.29, 192.168.112.191 |
2025-11-01 13:51:26.263653 | orchestrator | | config_drive | |
2025-11-01 13:51:26.263661 | orchestrator | | created | 2025-11-01T13:45:31Z |
2025-11-01 13:51:26.263669 | orchestrator | | description | None |
2025-11-01 13:51:26.263677 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-11-01 13:51:26.263685 | orchestrator | | hostId | d57f54f07b11ccc1bc585c5eea1a38280787a238a857be05982cba4f |
2025-11-01 13:51:26.263697 | orchestrator | | host_status | None |
2025-11-01 13:51:26.263711 | orchestrator | | id | 526cbb81-0636-430e-8995-1d1af38f3cb2 |
2025-11-01 13:51:26.263720 | orchestrator | | image | N/A (booted from volume) |
2025-11-01 13:51:26.263731 | orchestrator | | key_name | test |
2025-11-01 13:51:26.263740 | orchestrator | | locked | False |
2025-11-01 13:51:26.263748 | orchestrator | | locked_reason | None |
2025-11-01 13:51:26.263756 | orchestrator | | name | test |
2025-11-01 13:51:26.263764 | orchestrator | | pinned_availability_zone | None |
2025-11-01 13:51:26.263772 | orchestrator | | progress | 0 |
2025-11-01 13:51:26.263784 | orchestrator | | project_id | 7de4630ff8e9432281714491ff6c86d5 |
2025-11-01 13:51:26.263792 | orchestrator | | properties | hostname='test' |
2025-11-01 13:51:26.263805 | orchestrator | | security_groups | name='icmp' |
2025-11-01 13:51:26.263814 | orchestrator | | | name='ssh' |
2025-11-01 13:51:26.263825 | orchestrator | | server_groups | None |
2025-11-01 13:51:26.263835 | orchestrator | | status | ACTIVE |
2025-11-01 13:51:26.263844 | orchestrator | | tags | test |
2025-11-01 13:51:26.263854 | orchestrator | | trusted_image_certificates | None |
2025-11-01 13:51:26.263863 | orchestrator | | updated | 2025-11-01T13:49:49Z |
2025-11-01 13:51:26.263872 | orchestrator | | user_id | bd3fc2b140d040d183cd3aaff2cb5d3a |
2025-11-01 13:51:26.263890 | orchestrator | | volumes_attached | delete_on_termination='True', id='52b29b80-8fdc-40bb-9c07-9d53172f08d4' |
2025-11-01 13:51:26.263899 | orchestrator | | | delete_on_termination='False', id='56fb2809-7b83-4b64-bc2c-942250c3bd51' |
2025-11-01 13:51:26.267878 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:26.612696 | orchestrator | + openstack --os-cloud test server show test-1
2025-11-01 13:51:30.191311 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:30.191456 | orchestrator | | Field | Value |
2025-11-01 13:51:30.191474 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:30.191486 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-11-01 13:51:30.191497 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-11-01 13:51:30.191508 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-11-01 13:51:30.191538 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-11-01 13:51:30.191550 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-11-01 13:51:30.191561 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-11-01 13:51:30.191600 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-11-01 13:51:30.191614 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-11-01 13:51:30.191629 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-11-01 13:51:30.191640 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-11-01 13:51:30.191651 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-11-01 13:51:30.191662 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-11-01 13:51:30.191680 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-11-01 13:51:30.191691 | orchestrator | | OS-EXT-STS:task_state | None |
2025-11-01 13:51:30.191702 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-11-01 13:51:30.191713 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T13:47:04.000000 |
2025-11-01 13:51:30.191730 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-11-01 13:51:30.191742 | orchestrator | | accessIPv4 | |
2025-11-01 13:51:30.191758 | orchestrator | | accessIPv6 | |
2025-11-01 13:51:30.191769 | orchestrator | | addresses | auto_allocated_network=10.42.0.58, 192.168.112.112 |
2025-11-01 13:51:30.191780 | orchestrator | | config_drive | |
2025-11-01 13:51:30.191797 | orchestrator | | created | 2025-11-01T13:46:28Z |
2025-11-01 13:51:30.191808 | orchestrator | | description | None |
2025-11-01 13:51:30.191819 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-11-01 13:51:30.191831 | orchestrator | | hostId | 54b2101df0b2cc1ceecab647bf0a3c5dd279829d00318383207c3d7c |
2025-11-01 13:51:30.191842 | orchestrator | | host_status | None |
2025-11-01 13:51:30.191862 | orchestrator | | id | 72ff0dc5-6971-46cf-8fab-52070155c371 |
2025-11-01 13:51:30.191876 | orchestrator | | image | N/A (booted from volume) |
2025-11-01 13:51:30.191893 | orchestrator | | key_name | test |
2025-11-01 13:51:30.191906 | orchestrator | | locked | False |
2025-11-01 13:51:30.191919 | orchestrator | | locked_reason | None |
2025-11-01 13:51:30.191937 | orchestrator | | name | test-1 |
2025-11-01 13:51:30.191950 | orchestrator | | pinned_availability_zone | None |
2025-11-01 13:51:30.191963 | orchestrator | | progress | 0 |
2025-11-01 13:51:30.191976 | orchestrator | | project_id | 7de4630ff8e9432281714491ff6c86d5 |
2025-11-01 13:51:30.191988 | orchestrator | | properties | hostname='test-1' |
2025-11-01 13:51:30.192008 | orchestrator | | security_groups | name='icmp' |
2025-11-01 13:51:30.192021 | orchestrator | | | name='ssh' |
2025-11-01 13:51:30.192038 | orchestrator | | server_groups | None |
2025-11-01 13:51:30.192051 | orchestrator | | status | ACTIVE |
2025-11-01 13:51:30.192075 | orchestrator | | tags | test |
2025-11-01 13:51:30.192088 | orchestrator | | trusted_image_certificates | None |
2025-11-01 13:51:30.192101 | orchestrator | | updated | 2025-11-01T13:49:54Z |
2025-11-01 13:51:30.192112 | orchestrator | | user_id | bd3fc2b140d040d183cd3aaff2cb5d3a |
2025-11-01 13:51:30.192123 | orchestrator | | volumes_attached | delete_on_termination='True', id='7b4b735c-0787-409e-a568-270a00a7b941' |
2025-11-01 13:51:30.196121 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:30.546213 | orchestrator | + openstack --os-cloud test server show test-2
2025-11-01 13:51:33.896880 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:33.897002 | orchestrator | | Field | Value |
2025-11-01 13:51:33.897020 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:33.897055 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-11-01 13:51:33.897067 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-11-01 13:51:33.897078 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-11-01 13:51:33.897089 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-11-01 13:51:33.897100 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-11-01 13:51:33.897111 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-11-01 13:51:33.897141 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-11-01 13:51:33.897153 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-11-01 13:51:33.897164 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-11-01 13:51:33.897183 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-11-01 13:51:33.897194 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-11-01 13:51:33.897210 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-11-01 13:51:33.897231 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-11-01 13:51:33.897261 | orchestrator | | OS-EXT-STS:task_state | None |
2025-11-01 13:51:33.897304 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-11-01 13:51:33.897317 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T13:48:00.000000 |
2025-11-01 13:51:33.897337 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-11-01 13:51:33.897349 | orchestrator | | accessIPv4 | |
2025-11-01 13:51:33.897426 | orchestrator | | accessIPv6 | |
2025-11-01 13:51:33.897442 | orchestrator | | addresses | auto_allocated_network=10.42.0.17, 192.168.112.155 |
2025-11-01 13:51:33.897455 | orchestrator | | config_drive | |
2025-11-01 13:51:33.897468 | orchestrator | | created | 2025-11-01T13:47:25Z |
2025-11-01 13:51:33.897481 | orchestrator | | description | None |
2025-11-01 13:51:33.897493 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-11-01 13:51:33.897506 | orchestrator | | hostId | b2374d55583e175f2b407ef3527fb93249b8a186ed6cac9d3da0d15c |
2025-11-01 13:51:33.897519 | orchestrator | | host_status | None |
2025-11-01 13:51:33.897541 | orchestrator | | id | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 |
2025-11-01 13:51:33.897554 | orchestrator | | image | N/A (booted from volume) |
2025-11-01 13:51:33.897579 | orchestrator | | key_name | test |
2025-11-01 13:51:33.897592 | orchestrator | | locked | False |
2025-11-01 13:51:33.897605 | orchestrator | | locked_reason | None |
2025-11-01 13:51:33.897618 | orchestrator | | name | test-2 |
2025-11-01 13:51:33.897630 | orchestrator | | pinned_availability_zone | None |
2025-11-01 13:51:33.897643 | orchestrator | | progress | 0 |
2025-11-01 13:51:33.897655 | orchestrator | | project_id | 7de4630ff8e9432281714491ff6c86d5 |
2025-11-01 13:51:33.897668 | orchestrator | | properties | hostname='test-2' |
2025-11-01 13:51:33.897688 | orchestrator | | security_groups | name='icmp' |
2025-11-01 13:51:33.897710 | orchestrator | | | name='ssh' |
2025-11-01 13:51:33.897727 | orchestrator | | server_groups | None |
2025-11-01 13:51:33.897741 | orchestrator | | status | ACTIVE |
2025-11-01 13:51:33.897754 | orchestrator | | tags | test |
2025-11-01 13:51:33.897765 | orchestrator | | trusted_image_certificates | None |
2025-11-01 13:51:33.897777 | orchestrator | | updated | 2025-11-01T13:49:59Z |
2025-11-01 13:51:33.897787 | orchestrator | | user_id | bd3fc2b140d040d183cd3aaff2cb5d3a |
2025-11-01 13:51:33.897798 | orchestrator | | volumes_attached | delete_on_termination='True', id='387c6f6b-3186-461b-b1b8-3246aa8d1d04' |
2025-11-01 13:51:33.905197 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:34.285231 | orchestrator | + openstack --os-cloud test server show test-3
2025-11-01 13:51:37.810147 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:37.810251 | orchestrator | | Field | Value |
2025-11-01 13:51:37.810277 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:37.810289 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-11-01 13:51:37.810301 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-11-01 13:51:37.810312 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-11-01 13:51:37.810323 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-11-01 13:51:37.810334 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-11-01 13:51:37.810345 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-11-01 13:51:37.810422 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-11-01 13:51:37.810437 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-11-01 13:51:37.810449 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-11-01 13:51:37.810465 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-11-01 13:51:37.810477 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-11-01 13:51:37.810489 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-11-01 13:51:37.810500 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-11-01 13:51:37.810511 | orchestrator | | OS-EXT-STS:task_state | None |
2025-11-01 13:51:37.810523 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-11-01 13:51:37.810544 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T13:48:47.000000 |
2025-11-01 13:51:37.810563 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-11-01 13:51:37.810574 | orchestrator | | accessIPv4 | |
2025-11-01 13:51:37.810586 | orchestrator | | accessIPv6 | |
2025-11-01 13:51:37.810598 | orchestrator | | addresses | auto_allocated_network=10.42.0.24, 192.168.112.195 |
2025-11-01 13:51:37.810610 | orchestrator | | config_drive | |
2025-11-01 13:51:37.810624 | orchestrator | | created | 2025-11-01T13:48:21Z |
2025-11-01 13:51:37.810637 | orchestrator | | description | None |
2025-11-01 13:51:37.810650 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-11-01 13:51:37.810663 | orchestrator | | hostId | 54b2101df0b2cc1ceecab647bf0a3c5dd279829d00318383207c3d7c |
2025-11-01 13:51:37.810681 | orchestrator | | host_status | None |
2025-11-01 13:51:37.810701 | orchestrator | | id | 9f23c948-7478-46f1-803b-e4f684818cac |
2025-11-01 13:51:37.810721 | orchestrator | | image | N/A (booted from volume) |
2025-11-01 13:51:37.810738 | orchestrator | | key_name | test |
2025-11-01 13:51:37.810751 | orchestrator | | locked | False |
2025-11-01 13:51:37.810763 | orchestrator | | locked_reason | None |
2025-11-01 13:51:37.810775 | orchestrator | | name | test-3 |
2025-11-01 13:51:37.810788 | orchestrator | | pinned_availability_zone | None |
2025-11-01 13:51:37.810801 | orchestrator | | progress | 0 |
2025-11-01 13:51:37.810819 | orchestrator | | project_id | 7de4630ff8e9432281714491ff6c86d5 |
2025-11-01 13:51:37.810832 | orchestrator | | properties | hostname='test-3' |
2025-11-01 13:51:37.810852 | orchestrator | | security_groups | name='icmp' |
2025-11-01 13:51:37.810865 | orchestrator | | | name='ssh' |
2025-11-01 13:51:37.810882 | orchestrator | | server_groups | None |
2025-11-01 13:51:37.810895 | orchestrator | | status | ACTIVE |
2025-11-01 13:51:37.810908 | orchestrator | | tags | test |
2025-11-01 13:51:37.810921 | orchestrator | | trusted_image_certificates | None |
2025-11-01 13:51:37.810933 | orchestrator | | updated | 2025-11-01T13:50:04Z |
2025-11-01 13:51:37.810952 | orchestrator | | user_id | bd3fc2b140d040d183cd3aaff2cb5d3a |
2025-11-01 13:51:37.810965 | orchestrator | | volumes_attached | delete_on_termination='True', id='6a333bf9-7ade-45ea-97a7-e55c687614f8' |
2025-11-01 13:51:37.813556 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:38.196105 | orchestrator | + openstack --os-cloud test server show test-4
2025-11-01 13:51:41.428202 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:41.428285 | orchestrator | | Field | Value |
2025-11-01 13:51:41.428306 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-11-01 13:51:41.428314 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-11-01 13:51:41.428320 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-11-01 13:51:41.428326 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-11-01 13:51:41.428347 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-11-01 13:51:41.428354 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-11-01 13:51:41.428386 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-11-01 13:51:41.428406 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-11-01 13:51:41.428413 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-11-01 13:51:41.428420 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-11-01 13:51:41.428429 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-11-01 13:51:41.428436 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-11-01 13:51:41.428443 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-11-01 13:51:41.428454 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-11-01 13:51:41.428460 | orchestrator | | OS-EXT-STS:task_state | None |
2025-11-01 13:51:41.428467 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-11-01 13:51:41.428473 | orchestrator | | OS-SRV-USG:launched_at | 2025-11-01T13:49:31.000000 |
2025-11-01 13:51:41.428484 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-11-01 13:51:41.428490 | orchestrator | | accessIPv4 | |
2025-11-01 13:51:41.428497 | orchestrator | | accessIPv6 | |
2025-11-01 13:51:41.428506 | orchestrator | | addresses | auto_allocated_network=10.42.0.13, 192.168.112.178 |
2025-11-01 13:51:41.428512 | orchestrator | | config_drive | |
2025-11-01 13:51:41.428519 | orchestrator | | created | 2025-11-01T13:49:05Z |
2025-11-01 13:51:41.428529 | orchestrator | | description | None |
2025-11-01 13:51:41.428536 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-11-01 13:51:41.428542 | orchestrator | | hostId | b2374d55583e175f2b407ef3527fb93249b8a186ed6cac9d3da0d15c |
2025-11-01 13:51:41.428548 | orchestrator | | host_status | None |
2025-11-01 13:51:41.428559 | orchestrator | | id | fda44fa8-39d6-4af4-bfff-5d701aca2409 |
2025-11-01 13:51:41.428566 | orchestrator | | image | N/A (booted from volume) |
2025-11-01 13:51:41.428572 | orchestrator | | key_name | test |
2025-11-01 13:51:41.428581 | orchestrator | | locked | False |
2025-11-01 13:51:41.428587 | orchestrator | | locked_reason | None |
2025-11-01 13:51:41.428599 | orchestrator | | name | test-4 |
2025-11-01 13:51:41.428605 | orchestrator | | pinned_availability_zone | None |
2025-11-01 13:51:41.428611 | orchestrator | | progress | 0 |
2025-11-01 13:51:41.428618 | orchestrator | | project_id | 7de4630ff8e9432281714491ff6c86d5 |
2025-11-01 13:51:41.428624 | orchestrator | | properties | hostname='test-4' |
2025-11-01 13:51:41.428635 | orchestrator | | security_groups | name='icmp' |
2025-11-01 13:51:41.428641 | orchestrator | | | name='ssh' |
2025-11-01 13:51:41.428647 | orchestrator | | server_groups | None |
2025-11-01 13:51:41.428657 | orchestrator | | status | ACTIVE |
2025-11-01 13:51:41.428668 | orchestrator | | tags | test |
2025-11-01 13:51:41.428674 | orchestrator | | trusted_image_certificates | None |
2025-11-01 13:51:41.428681 | orchestrator | | updated | 2025-11-01T13:50:10Z |
2025-11-01 13:51:41.428687 | orchestrator | | user_id | bd3fc2b140d040d183cd3aaff2cb5d3a |
2025-11-01 13:51:41.428694 | orchestrator | | volumes_attached | delete_on_termination='True', id='4bd2006f-061f-4a3a-b9ea-6551536dfa60' |
2025-11-01 13:51:41.436383 | orchestrator |
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-11-01 13:51:41.781262 | orchestrator | + server_ping 2025-11-01 13:51:41.783066 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-11-01 13:51:41.783098 | orchestrator | ++ tr -d '\r' 2025-11-01 13:51:44.934136 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 13:51:44.934200 | orchestrator | + ping -c3 192.168.112.155 2025-11-01 13:51:44.950609 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data. 
2025-11-01 13:51:44.950640 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=9.34 ms 2025-11-01 13:51:45.945811 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=3.45 ms 2025-11-01 13:51:46.945804 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.03 ms 2025-11-01 13:51:46.945893 | orchestrator | 2025-11-01 13:51:46.945910 | orchestrator | --- 192.168.112.155 ping statistics --- 2025-11-01 13:51:46.945923 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-11-01 13:51:46.945934 | orchestrator | rtt min/avg/max/mdev = 2.025/4.937/9.338/3.165 ms 2025-11-01 13:51:46.945946 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 13:51:46.945958 | orchestrator | + ping -c3 192.168.112.112 2025-11-01 13:51:46.960028 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2025-11-01 13:51:46.960079 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=9.27 ms 2025-11-01 13:51:47.955771 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=3.10 ms 2025-11-01 13:51:48.956213 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.08 ms 2025-11-01 13:51:48.956305 | orchestrator | 2025-11-01 13:51:48.956320 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-11-01 13:51:48.956332 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-11-01 13:51:48.956343 | orchestrator | rtt min/avg/max/mdev = 2.076/4.816/9.270/3.177 ms 2025-11-01 13:51:48.956826 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 13:51:48.956852 | orchestrator | + ping -c3 192.168.112.195 2025-11-01 13:51:48.968214 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data. 
2025-11-01 13:51:48.968711 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=8.76 ms 2025-11-01 13:51:49.964150 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.36 ms 2025-11-01 13:51:50.966459 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.27 ms 2025-11-01 13:51:50.966543 | orchestrator | 2025-11-01 13:51:50.966557 | orchestrator | --- 192.168.112.195 ping statistics --- 2025-11-01 13:51:50.966569 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 13:51:50.966580 | orchestrator | rtt min/avg/max/mdev = 2.271/4.463/8.757/3.036 ms 2025-11-01 13:51:50.966592 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 13:51:50.966603 | orchestrator | + ping -c3 192.168.112.178 2025-11-01 13:51:50.973858 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data. 2025-11-01 13:51:50.973883 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=4.50 ms 2025-11-01 13:51:51.973933 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.54 ms 2025-11-01 13:51:52.975256 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.70 ms 2025-11-01 13:51:52.975343 | orchestrator | 2025-11-01 13:51:52.975406 | orchestrator | --- 192.168.112.178 ping statistics --- 2025-11-01 13:51:52.975420 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-11-01 13:51:52.975431 | orchestrator | rtt min/avg/max/mdev = 1.699/2.913/4.500/1.173 ms 2025-11-01 13:51:52.975453 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-11-01 13:51:52.975465 | orchestrator | + ping -c3 192.168.112.191 2025-11-01 13:51:52.987846 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 
2025-11-01 13:51:52.987874 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=9.65 ms
2025-11-01 13:51:53.983446 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=3.09 ms
2025-11-01 13:51:54.984302 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.18 ms
2025-11-01 13:51:54.984424 | orchestrator |
2025-11-01 13:51:54.984438 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-11-01 13:51:54.984449 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:51:54.984459 | orchestrator | rtt min/avg/max/mdev = 2.177/4.971/9.652/3.330 ms
2025-11-01 13:51:54.984841 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-11-01 13:51:54.984861 | orchestrator | + compute_list
2025-11-01 13:51:54.984872 | orchestrator | + osism manage compute list testbed-node-3
2025-11-01 13:51:58.879910 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:51:58.880003 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:51:58.880015 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:51:58.880026 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE   |
2025-11-01 13:51:58.880037 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE   |
2025-11-01 13:51:58.880048 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:51:59.290116 | orchestrator | + osism manage compute list testbed-node-4
2025-11-01 13:52:03.089659 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:52:03.089771 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:52:03.089818 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:52:03.089832 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE   |
2025-11-01 13:52:03.089844 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE   |
2025-11-01 13:52:03.089855 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:52:03.474426 | orchestrator | + osism manage compute list testbed-node-5
2025-11-01 13:52:07.169024 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:52:07.169121 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:52:07.169137 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:52:07.169150 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test   | ACTIVE   |
2025-11-01 13:52:07.169161 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:52:07.571040 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-11-01 13:52:11.169041 | orchestrator | 2025-11-01 13:52:11 | INFO  | Live migrating server 9f23c948-7478-46f1-803b-e4f684818cac
2025-11-01 13:52:24.625685 | orchestrator | 2025-11-01 13:52:24 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:27.098480 | orchestrator | 2025-11-01 13:52:27 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:29.501274 | orchestrator | 2025-11-01 13:52:29 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:31.887926 | orchestrator | 2025-11-01 13:52:31 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:34.559953 | orchestrator | 2025-11-01 13:52:34 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:36.859884 | orchestrator | 2025-11-01 13:52:36 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:39.183116 | orchestrator | 2025-11-01 13:52:39 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:41.497845 | orchestrator | 2025-11-01 13:52:41 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:43.850746 | orchestrator | 2025-11-01 13:52:43 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:52:46.175513 | orchestrator | 2025-11-01 13:52:46 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) completed with status ACTIVE
2025-11-01 13:52:46.175612 | orchestrator | 2025-11-01 13:52:46 | INFO  | Live migrating server 72ff0dc5-6971-46cf-8fab-52070155c371
2025-11-01 13:52:58.716321 | orchestrator | 2025-11-01 13:52:58 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:01.091241 | orchestrator | 2025-11-01 13:53:01 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:03.400992 | orchestrator | 2025-11-01 13:53:03 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:05.749229 | orchestrator | 2025-11-01 13:53:05 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:08.135487 | orchestrator | 2025-11-01 13:53:08 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:10.516478 | orchestrator | 2025-11-01 13:53:10 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:12.816996 | orchestrator | 2025-11-01 13:53:12 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:15.142978 | orchestrator | 2025-11-01 13:53:15 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:17.423193 | orchestrator | 2025-11-01 13:53:17 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:53:19.820680 | orchestrator | 2025-11-01 13:53:19 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) completed with status ACTIVE
2025-11-01 13:53:20.221850 | orchestrator | + compute_list
2025-11-01 13:53:20.221921 | orchestrator | + osism manage compute list testbed-node-3
2025-11-01 13:53:23.723784 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:53:23.723893 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:53:23.723908 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:53:23.723919 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE   |
2025-11-01 13:53:23.723930 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE   |
2025-11-01 13:53:23.723941 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE   |
2025-11-01 13:53:23.723952 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE   |
2025-11-01 13:53:23.723963 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:53:24.163478 | orchestrator | + osism manage compute list testbed-node-4
2025-11-01 13:53:27.336792 | orchestrator | +------+--------+----------+
2025-11-01 13:53:27.336890 | orchestrator | | ID   | Name   | Status   |
2025-11-01 13:53:27.336905 | orchestrator | |------+--------+----------|
2025-11-01 13:53:27.336917 | orchestrator | +------+--------+----------+
2025-11-01 13:53:27.703414 | orchestrator | + osism manage compute list testbed-node-5
2025-11-01 13:53:31.093697 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:53:31.093803 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:53:31.093819 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:53:31.093831 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test   | ACTIVE   |
2025-11-01 13:53:31.093861 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:53:31.494527 | orchestrator | + server_ping
2025-11-01 13:53:31.495593 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-11-01 13:53:31.495626 | orchestrator | ++ tr -d '\r'
2025-11-01 13:53:34.552467 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:53:34.552589 | orchestrator | + ping -c3 192.168.112.155
2025-11-01 13:53:34.563162 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2025-11-01 13:53:34.563224 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=9.06 ms
2025-11-01 13:53:35.558281 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.49 ms
2025-11-01 13:53:36.559831 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.05 ms
2025-11-01 13:53:36.559930 | orchestrator |
2025-11-01 13:53:36.559946 | orchestrator | --- 192.168.112.155 ping statistics ---
2025-11-01 13:53:36.559959 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:53:36.559970 | orchestrator | rtt min/avg/max/mdev = 2.046/4.532/9.063/3.208 ms
2025-11-01 13:53:36.561194 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:53:36.561218 | orchestrator | + ping -c3 192.168.112.112
2025-11-01 13:53:36.570534 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-11-01 13:53:36.570564 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=6.94 ms
2025-11-01 13:53:37.567743 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.45 ms
2025-11-01 13:53:38.569431 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.15 ms
2025-11-01 13:53:38.569527 | orchestrator |
2025-11-01 13:53:38.569543 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-11-01 13:53:38.569555 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:53:38.569596 | orchestrator | rtt min/avg/max/mdev = 2.149/3.846/6.936/2.188 ms
2025-11-01 13:53:38.570158 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:53:38.570185 | orchestrator | + ping -c3 192.168.112.195
2025-11-01 13:53:38.585752 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2025-11-01 13:53:38.585789 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=12.3 ms
2025-11-01 13:53:39.578546 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=3.14 ms
2025-11-01 13:53:40.579452 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.56 ms
2025-11-01 13:53:40.579535 | orchestrator |
2025-11-01 13:53:40.579548 | orchestrator | --- 192.168.112.195 ping statistics ---
2025-11-01 13:53:40.579560 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:53:40.579571 | orchestrator | rtt min/avg/max/mdev = 2.558/6.001/12.307/4.465 ms
2025-11-01 13:53:40.579594 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:53:40.579607 | orchestrator | + ping -c3 192.168.112.178
2025-11-01 13:53:40.590493 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
2025-11-01 13:53:40.590518 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=7.83 ms
2025-11-01 13:53:41.587499 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=3.38 ms
2025-11-01 13:53:42.588437 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.89 ms
2025-11-01 13:53:42.588522 | orchestrator |
2025-11-01 13:53:42.588535 | orchestrator | --- 192.168.112.178 ping statistics ---
2025-11-01 13:53:42.588547 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:53:42.588559 | orchestrator | rtt min/avg/max/mdev = 1.893/4.369/7.834/2.524 ms
2025-11-01 13:53:42.588571 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:53:42.588582 | orchestrator | + ping -c3 192.168.112.191
2025-11-01 13:53:42.602913 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-11-01 13:53:42.602941 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=10.4 ms
2025-11-01 13:53:43.597328 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.94 ms
2025-11-01 13:53:44.598549 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.12 ms
2025-11-01 13:53:44.598658 | orchestrator |
2025-11-01 13:53:44.598673 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-11-01 13:53:44.599259 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:53:44.599281 | orchestrator | rtt min/avg/max/mdev = 2.117/5.149/10.392/3.722 ms
2025-11-01 13:53:44.599294 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-11-01 13:53:48.034852 | orchestrator | 2025-11-01 13:53:48 | INFO  | Live migrating server 526cbb81-0636-430e-8995-1d1af38f3cb2
2025-11-01 13:53:59.102006 | orchestrator | 2025-11-01 13:53:59 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:01.439805 | orchestrator | 2025-11-01 13:54:01 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:03.725591 | orchestrator | 2025-11-01 13:54:03 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:06.126343 | orchestrator | 2025-11-01 13:54:06 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:08.480656 | orchestrator | 2025-11-01 13:54:08 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:10.761744 | orchestrator | 2025-11-01 13:54:10 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:13.049189 | orchestrator | 2025-11-01 13:54:13 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:15.321177 | orchestrator | 2025-11-01 13:54:15 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:17.636828 | orchestrator | 2025-11-01 13:54:17 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:19.954193 | orchestrator | 2025-11-01 13:54:19 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:54:22.366170 | orchestrator | 2025-11-01 13:54:22 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) completed with status ACTIVE
2025-11-01 13:54:22.795758 | orchestrator | + compute_list
2025-11-01 13:54:22.795846 | orchestrator | + osism manage compute list testbed-node-3
2025-11-01 13:54:26.375471 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:54:26.375566 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:54:26.375579 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:54:26.375591 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE   |
2025-11-01 13:54:26.375601 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE   |
2025-11-01 13:54:26.375612 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE   |
2025-11-01 13:54:26.375623 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE   |
2025-11-01 13:54:26.375634 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test   | ACTIVE   |
2025-11-01 13:54:26.375644 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:54:26.771355 | orchestrator | + osism manage compute list testbed-node-4
2025-11-01 13:54:29.794448 | orchestrator | +------+--------+----------+
2025-11-01 13:54:29.794536 | orchestrator | | ID   | Name   | Status   |
2025-11-01 13:54:29.794549 | orchestrator | |------+--------+----------|
2025-11-01 13:54:29.794561 | orchestrator | +------+--------+----------+
2025-11-01 13:54:30.168141 | orchestrator | + osism manage compute list testbed-node-5
2025-11-01 13:54:33.145183 | orchestrator | +------+--------+----------+
2025-11-01 13:54:33.145279 | orchestrator | | ID   | Name   | Status   |
2025-11-01 13:54:33.145293 | orchestrator | |------+--------+----------|
2025-11-01 13:54:33.145304 | orchestrator | +------+--------+----------+
2025-11-01 13:54:33.541212 | orchestrator | + server_ping
2025-11-01 13:54:33.541918 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-11-01 13:54:33.542084 | orchestrator | ++ tr -d '\r'
2025-11-01 13:54:36.874354 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:54:36.874480 | orchestrator | + ping -c3 192.168.112.155
2025-11-01 13:54:36.884754 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2025-11-01 13:54:36.884806 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=7.52 ms
2025-11-01 13:54:37.882204 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.83 ms
2025-11-01 13:54:38.883419 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.39 ms
2025-11-01 13:54:38.883524 | orchestrator |
2025-11-01 13:54:38.883542 | orchestrator | --- 192.168.112.155 ping statistics ---
2025-11-01 13:54:38.883555 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-11-01 13:54:38.883566 | orchestrator | rtt min/avg/max/mdev = 2.389/4.246/7.523/2.323 ms
2025-11-01 13:54:38.883802 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:54:38.883824 | orchestrator | + ping -c3 192.168.112.112
2025-11-01 13:54:38.896146 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-11-01 13:54:38.896171 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.58 ms
2025-11-01 13:54:39.893195 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=3.24 ms
2025-11-01 13:54:40.893755 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.16 ms
2025-11-01 13:54:40.893829 | orchestrator |
2025-11-01 13:54:40.893843 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-11-01 13:54:40.893855 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:54:40.893867 | orchestrator | rtt min/avg/max/mdev = 2.163/4.327/7.578/2.340 ms
2025-11-01 13:54:40.895106 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:54:40.895133 | orchestrator | + ping -c3 192.168.112.195
2025-11-01 13:54:40.910547 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2025-11-01 13:54:40.910573 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=10.0 ms
2025-11-01 13:54:41.904712 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.82 ms
2025-11-01 13:54:42.905500 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.10 ms
2025-11-01 13:54:42.905610 | orchestrator |
2025-11-01 13:54:42.905616 | orchestrator | --- 192.168.112.195 ping statistics ---
2025-11-01 13:54:42.905622 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-11-01 13:54:42.905627 | orchestrator | rtt min/avg/max/mdev = 2.099/4.972/10.002/3.568 ms
2025-11-01 13:54:42.905640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:54:42.905797 | orchestrator | + ping -c3 192.168.112.178
2025-11-01 13:54:42.916551 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
2025-11-01 13:54:42.916562 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=7.74 ms
2025-11-01 13:54:43.913639 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.59 ms
2025-11-01 13:54:44.915527 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.96 ms
2025-11-01 13:54:44.915625 | orchestrator |
2025-11-01 13:54:44.915640 | orchestrator | --- 192.168.112.178 ping statistics ---
2025-11-01 13:54:44.915652 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 13:54:44.915664 | orchestrator | rtt min/avg/max/mdev = 1.957/4.095/7.741/2.590 ms
2025-11-01 13:54:44.915676 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:54:44.915687 | orchestrator | + ping -c3 192.168.112.191
2025-11-01 13:54:44.925761 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-11-01 13:54:44.925810 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=6.76 ms
2025-11-01 13:54:45.924035 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.61 ms
2025-11-01 13:54:46.925693 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.10 ms
2025-11-01 13:54:46.925771 | orchestrator |
2025-11-01 13:54:46.925784 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-11-01 13:54:46.925798 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 13:54:46.925809 | orchestrator | rtt min/avg/max/mdev = 2.104/3.824/6.762/2.087 ms
2025-11-01 13:54:46.927044 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-11-01 13:54:50.588610 | orchestrator | 2025-11-01 13:54:50 | INFO  | Live migrating server fda44fa8-39d6-4af4-bfff-5d701aca2409
2025-11-01 13:55:03.262013 | orchestrator | 2025-11-01 13:55:03 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:05.610485 | orchestrator | 2025-11-01 13:55:05 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:07.994756 | orchestrator | 2025-11-01 13:55:07 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:10.347299 | orchestrator | 2025-11-01 13:55:10 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:12.716414 | orchestrator | 2025-11-01 13:55:12 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:15.032467 | orchestrator | 2025-11-01 13:55:15 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:17.381697 | orchestrator | 2025-11-01 13:55:17 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:19.766919 | orchestrator | 2025-11-01 13:55:19 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:55:22.071809 | orchestrator | 2025-11-01 13:55:22 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) completed with status ACTIVE
2025-11-01 13:55:22.071900 | orchestrator | 2025-11-01 13:55:22 | INFO  | Live migrating server 9f23c948-7478-46f1-803b-e4f684818cac
2025-11-01 13:55:34.625094 | orchestrator | 2025-11-01 13:55:34 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:37.211313 | orchestrator | 2025-11-01 13:55:37 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:39.587638 | orchestrator | 2025-11-01 13:55:39 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:41.993858 | orchestrator | 2025-11-01 13:55:41 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:44.282489 | orchestrator | 2025-11-01 13:55:44 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:46.584322 | orchestrator | 2025-11-01 13:55:46 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:48.878091 | orchestrator | 2025-11-01 13:55:48 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:51.240013 | orchestrator | 2025-11-01 13:55:51 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:55:53.661873 | orchestrator | 2025-11-01 13:55:53 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) completed with status ACTIVE
2025-11-01 13:55:53.661976 | orchestrator | 2025-11-01 13:55:53 | INFO  | Live migrating server 08792d1f-7ae1-4479-827e-6a51e5fce1f3
2025-11-01 13:56:06.057820 | orchestrator | 2025-11-01 13:56:06 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:08.454612 | orchestrator | 2025-11-01 13:56:08 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:10.831609 | orchestrator | 2025-11-01 13:56:10 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:13.126357 | orchestrator | 2025-11-01 13:56:13 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:15.400499 | orchestrator | 2025-11-01 13:56:15 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:17.713935 | orchestrator | 2025-11-01 13:56:17 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:20.038868 | orchestrator | 2025-11-01 13:56:20 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:22.511431 | orchestrator | 2025-11-01 13:56:22 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:56:24.838269 | orchestrator | 2025-11-01 13:56:24 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) completed with status ACTIVE
2025-11-01 13:56:24.838365 | orchestrator | 2025-11-01 13:56:24 | INFO  | Live migrating server 72ff0dc5-6971-46cf-8fab-52070155c371
2025-11-01 13:56:34.570117 | orchestrator | 2025-11-01 13:56:34 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:36.942868 | orchestrator | 2025-11-01 13:56:36 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:39.300886 | orchestrator | 2025-11-01 13:56:39 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:41.662220 | orchestrator | 2025-11-01 13:56:41 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:44.182221 | orchestrator | 2025-11-01 13:56:44 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:46.530893 | orchestrator | 2025-11-01 13:56:46 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:48.905735 | orchestrator | 2025-11-01 13:56:48 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:51.316801 | orchestrator | 2025-11-01 13:56:51 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:56:53.649835 | orchestrator | 2025-11-01 13:56:53 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) completed with status ACTIVE
2025-11-01 13:56:53.649929 | orchestrator | 2025-11-01 13:56:53 | INFO  | Live migrating server 526cbb81-0636-430e-8995-1d1af38f3cb2
2025-11-01 13:57:05.411007 | orchestrator | 2025-11-01 13:57:05 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:07.774652 | orchestrator | 2025-11-01 13:57:07 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:10.136851 | orchestrator | 2025-11-01 13:57:10 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:12.510123 | orchestrator | 2025-11-01 13:57:12 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:14.827569 | orchestrator | 2025-11-01 13:57:14 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:17.097971 | orchestrator | 2025-11-01 13:57:17 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:19.566909 | orchestrator | 2025-11-01 13:57:19 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:21.832208 | orchestrator | 2025-11-01 13:57:21 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:24.149581 | orchestrator | 2025-11-01 13:57:24 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:26.491517 | orchestrator | 2025-11-01 13:57:26 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 13:57:28.983347 | orchestrator | 2025-11-01 13:57:28 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) completed with status ACTIVE
2025-11-01 13:57:29.623971 | orchestrator | + compute_list
2025-11-01 13:57:29.624049 | orchestrator | + osism manage compute list testbed-node-3
2025-11-01 13:57:32.954863 | orchestrator | +------+--------+----------+
2025-11-01 13:57:32.954966 | orchestrator | | ID   | Name   | Status   |
2025-11-01 13:57:32.954980 | orchestrator | |------+--------+----------|
2025-11-01 13:57:32.954991 | orchestrator | +------+--------+----------+
2025-11-01 13:57:33.385462 | orchestrator | + osism manage compute list testbed-node-4
2025-11-01 13:57:37.087251 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:57:37.087341 | orchestrator | | ID                                   | Name   | Status   |
2025-11-01 13:57:37.087354 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 13:57:37.087365 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE   |
2025-11-01 13:57:37.087438 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE   |
2025-11-01 13:57:37.087476 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE   |
2025-11-01 13:57:37.087486 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE   |
2025-11-01 13:57:37.087496 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test   | ACTIVE   |
2025-11-01 13:57:37.087506 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 13:57:37.497254 | orchestrator | + osism manage compute list testbed-node-5
2025-11-01 13:57:40.588609 | orchestrator | +------+--------+----------+
2025-11-01 13:57:40.588709 | orchestrator | | ID   | Name   | Status   |
2025-11-01 13:57:40.588724 | orchestrator | |------+--------+----------|
2025-11-01 13:57:40.588736 | orchestrator | +------+--------+----------+
2025-11-01 13:57:40.997065 | orchestrator | + server_ping
2025-11-01 13:57:40.998285 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-11-01 13:57:40.998509 | orchestrator | ++ tr -d '\r'
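The `server_ping` helper seen in the `set -x` trace above can be reconstructed from the trace itself: it asks the OpenStack CLI for all ACTIVE floating IPs of the `test` cloud, strips carriage returns, and pings each address three times. This is a hedged sketch based only on the traced commands; the function body in the job's actual script may differ in detail.

```shell
# Sketch reconstructed from the trace above (assumption: the real
# function looks roughly like this; "test" is the --os-cloud name
# used throughout the log).
server_ping() {
    # tr -d '\r' matters: CLI output can carry carriage returns that
    # would otherwise corrupt the address passed to ping.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address" || return 1
    done
}
```

The `|| return 1` makes a single unreachable floating IP fail the whole check, which matches how the job treats connectivity after each migration round.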
2025-11-01 13:57:44.560781 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:57:44.560877 | orchestrator | + ping -c3 192.168.112.155
2025-11-01 13:57:44.571939 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2025-11-01 13:57:44.571967 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=8.16 ms
2025-11-01 13:57:45.568256 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.81 ms
2025-11-01 13:57:46.570507 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=1.94 ms
2025-11-01 13:57:46.570596 | orchestrator |
2025-11-01 13:57:46.570610 | orchestrator | --- 192.168.112.155 ping statistics ---
2025-11-01 13:57:46.570621 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 13:57:46.570631 | orchestrator | rtt min/avg/max/mdev = 1.938/4.303/8.162/2.751 ms
2025-11-01 13:57:46.571032 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:57:46.571053 | orchestrator | + ping -c3 192.168.112.112
2025-11-01 13:57:46.582964 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-11-01 13:57:46.582987 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=9.54 ms
2025-11-01 13:57:47.578648 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=3.01 ms
2025-11-01 13:57:48.578775 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.86 ms
2025-11-01 13:57:48.578866 | orchestrator |
2025-11-01 13:57:48.578881 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-11-01 13:57:48.578895 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:57:48.578906 | orchestrator | rtt min/avg/max/mdev = 1.862/4.804/9.540/3.381 ms
2025-11-01 13:57:48.579144 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:57:48.579176 | orchestrator | + ping -c3 192.168.112.195
2025-11-01 13:57:48.589274 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2025-11-01 13:57:48.589321 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=5.29 ms
2025-11-01 13:57:49.588192 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.39 ms
2025-11-01 13:57:50.589689 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=1.94 ms
2025-11-01 13:57:50.589823 | orchestrator |
2025-11-01 13:57:50.589833 | orchestrator | --- 192.168.112.195 ping statistics ---
2025-11-01 13:57:50.589840 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:57:50.589846 | orchestrator | rtt min/avg/max/mdev = 1.936/3.203/5.287/1.485 ms
2025-11-01 13:57:50.589859 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:57:50.589865 | orchestrator | + ping -c3 192.168.112.178
2025-11-01 13:57:50.600825 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
2025-11-01 13:57:50.600837 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=6.10 ms
2025-11-01 13:57:51.599203 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.77 ms
2025-11-01 13:57:52.600546 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.71 ms
2025-11-01 13:57:52.600649 | orchestrator |
2025-11-01 13:57:52.600666 | orchestrator | --- 192.168.112.178 ping statistics ---
2025-11-01 13:57:52.600707 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 13:57:52.600719 | orchestrator | rtt min/avg/max/mdev = 1.714/3.528/6.100/1.868 ms
2025-11-01 13:57:52.601116 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 13:57:52.601140 | orchestrator | + ping -c3 192.168.112.191
2025-11-01 13:57:52.613798 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-11-01 13:57:52.613826 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=8.43 ms
2025-11-01 13:57:53.609665 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.63 ms
2025-11-01 13:57:54.611266 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.08 ms
2025-11-01 13:57:54.611361 | orchestrator |
2025-11-01 13:57:54.611421 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-11-01 13:57:54.611436 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 13:57:54.611448 | orchestrator | rtt min/avg/max/mdev = 2.082/4.380/8.428/2.870 ms
2025-11-01 13:57:54.611851 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-11-01 13:57:58.254345 | orchestrator | 2025-11-01 13:57:58 | INFO  | Live migrating server fda44fa8-39d6-4af4-bfff-5d701aca2409
2025-11-01 13:58:08.534314 | orchestrator | 2025-11-01 13:58:08 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:10.873323 | orchestrator | 2025-11-01 13:58:10 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:13.241459 | orchestrator | 2025-11-01 13:58:13 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:15.491992 | orchestrator | 2025-11-01 13:58:15 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:17.769198 | orchestrator | 2025-11-01 13:58:17 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:20.060718 | orchestrator | 2025-11-01 13:58:20 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:22.328485 | orchestrator | 2025-11-01 13:58:22 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:24.645539 | orchestrator | 2025-11-01 13:58:24 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) is still in progress
2025-11-01 13:58:26.893681 | orchestrator | 2025-11-01 13:58:26 | INFO  | Live migration of fda44fa8-39d6-4af4-bfff-5d701aca2409 (test-4) completed with status ACTIVE
2025-11-01 13:58:26.893761 | orchestrator | 2025-11-01 13:58:26 | INFO  | Live migrating server 9f23c948-7478-46f1-803b-e4f684818cac
2025-11-01 13:58:36.621499 | orchestrator | 2025-11-01 13:58:36 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:38.987708 | orchestrator | 2025-11-01 13:58:38 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:41.322740 | orchestrator | 2025-11-01 13:58:41 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:43.684866 | orchestrator | 2025-11-01 13:58:43 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:46.038721 | orchestrator | 2025-11-01 13:58:46 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:48.323907 | orchestrator | 2025-11-01 13:58:48 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:50.630784 | orchestrator | 2025-11-01 13:58:50 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:52.894072 | orchestrator | 2025-11-01 13:58:52 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:55.217738 | orchestrator | 2025-11-01 13:58:55 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) is still in progress
2025-11-01 13:58:57.531760 | orchestrator | 2025-11-01 13:58:57 | INFO  | Live migration of 9f23c948-7478-46f1-803b-e4f684818cac (test-3) completed with status ACTIVE
2025-11-01 13:58:57.531844 | orchestrator | 2025-11-01 13:58:57 | INFO  | Live migrating server 08792d1f-7ae1-4479-827e-6a51e5fce1f3
2025-11-01 13:59:07.361863 | orchestrator | 2025-11-01 13:59:07 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:09.742507 | orchestrator | 2025-11-01 13:59:09 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:12.087352 | orchestrator | 2025-11-01 13:59:12 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:14.462298 | orchestrator | 2025-11-01 13:59:14 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:16.854564 | orchestrator | 2025-11-01 13:59:16 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:19.188657 | orchestrator | 2025-11-01 13:59:19 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:21.557728 | orchestrator | 2025-11-01 13:59:21 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:23.942666 | orchestrator | 2025-11-01 13:59:23 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) is still in progress
2025-11-01 13:59:26.293696 | orchestrator | 2025-11-01 13:59:26 | INFO  | Live migration of 08792d1f-7ae1-4479-827e-6a51e5fce1f3 (test-2) completed with status ACTIVE
2025-11-01 13:59:26.293793 | orchestrator | 2025-11-01 13:59:26 | INFO  | Live migrating server 72ff0dc5-6971-46cf-8fab-52070155c371
2025-11-01 13:59:36.723003 | orchestrator | 2025-11-01 13:59:36 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:39.063335 | orchestrator | 2025-11-01 13:59:39 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:41.437064 | orchestrator | 2025-11-01 13:59:41 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:43.782575 | orchestrator | 2025-11-01 13:59:43 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:46.080029 | orchestrator | 2025-11-01 13:59:46 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:48.363741 | orchestrator | 2025-11-01 13:59:48 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:50.653503 | orchestrator | 2025-11-01 13:59:50 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:52.932850 | orchestrator | 2025-11-01 13:59:52 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:55.458238 | orchestrator | 2025-11-01 13:59:55 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) is still in progress
2025-11-01 13:59:57.843086 | orchestrator | 2025-11-01 13:59:57 | INFO  | Live migration of 72ff0dc5-6971-46cf-8fab-52070155c371 (test-1) completed with status ACTIVE
2025-11-01 13:59:57.843206 | orchestrator | 2025-11-01 13:59:57 | INFO  | Live migrating server 526cbb81-0636-430e-8995-1d1af38f3cb2
2025-11-01 14:00:08.201662 | orchestrator | 2025-11-01 14:00:08 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:10.544203 | orchestrator | 2025-11-01 14:00:10 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:12.900045 | orchestrator | 2025-11-01 14:00:12 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:15.278134 | orchestrator | 2025-11-01 14:00:15 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:17.608529 | orchestrator | 2025-11-01 14:00:17 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:19.925592 | orchestrator | 2025-11-01 14:00:19 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:22.284954 | orchestrator | 2025-11-01 14:00:22 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:24.554482 | orchestrator | 2025-11-01 14:00:24 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:26.873870 | orchestrator | 2025-11-01 14:00:26 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:29.269176 | orchestrator | 2025-11-01 14:00:29 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) is still in progress
2025-11-01 14:00:31.574302 | orchestrator | 2025-11-01 14:00:31 | INFO  | Live migration of 526cbb81-0636-430e-8995-1d1af38f3cb2 (test) completed with status ACTIVE
2025-11-01 14:00:31.965718 | orchestrator | + compute_list
2025-11-01 14:00:31.965773 | orchestrator | + osism manage compute list testbed-node-3
2025-11-01 14:00:35.112941 | orchestrator | +------+--------+----------+
2025-11-01 14:00:35.113041 | orchestrator | | ID | Name | Status |
2025-11-01 14:00:35.113056 | orchestrator | |------+--------+----------|
2025-11-01 14:00:35.113068 | orchestrator | +------+--------+----------+
2025-11-01 14:00:35.519697 | orchestrator | + osism manage compute list testbed-node-4
2025-11-01 14:00:38.609491 | orchestrator | +------+--------+----------+
2025-11-01 14:00:38.609595 | orchestrator | | ID | Name | Status |
2025-11-01 14:00:38.609610 | orchestrator | |------+--------+----------|
2025-11-01 14:00:38.609621 | orchestrator | +------+--------+----------+
2025-11-01 14:00:39.028959 | orchestrator | + osism manage compute list testbed-node-5
2025-11-01 14:00:42.629602 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 14:00:42.629704 | orchestrator | | ID | Name | Status |
2025-11-01 14:00:42.629719 | orchestrator | |--------------------------------------+--------+----------|
2025-11-01 14:00:42.629731 | orchestrator | | fda44fa8-39d6-4af4-bfff-5d701aca2409 | test-4 | ACTIVE |
2025-11-01 14:00:42.629741 | orchestrator | | 9f23c948-7478-46f1-803b-e4f684818cac | test-3 | ACTIVE |
2025-11-01 14:00:42.629753 | orchestrator | | 08792d1f-7ae1-4479-827e-6a51e5fce1f3 | test-2 | ACTIVE |
2025-11-01 14:00:42.629764 | orchestrator | | 72ff0dc5-6971-46cf-8fab-52070155c371 | test-1 | ACTIVE |
2025-11-01 14:00:42.629775 | orchestrator | | 526cbb81-0636-430e-8995-1d1af38f3cb2 | test | ACTIVE |
2025-11-01 14:00:42.629786 | orchestrator | +--------------------------------------+--------+----------+
2025-11-01 14:00:43.027728 | orchestrator | + server_ping
2025-11-01 14:00:43.028912 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-11-01 14:00:43.030110 | orchestrator | ++ tr -d '\r'
2025-11-01 14:00:46.132052 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 14:00:46.132187 | orchestrator | + ping -c3 192.168.112.155
2025-11-01 14:00:46.145707 | orchestrator | PING 192.168.112.155 (192.168.112.155) 56(84) bytes of data.
2025-11-01 14:00:46.145777 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=1 ttl=63 time=11.2 ms
2025-11-01 14:00:47.138743 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=2 ttl=63 time=2.71 ms
2025-11-01 14:00:48.140773 | orchestrator | 64 bytes from 192.168.112.155: icmp_seq=3 ttl=63 time=2.30 ms
2025-11-01 14:00:48.140842 | orchestrator |
2025-11-01 14:00:48.140851 | orchestrator | --- 192.168.112.155 ping statistics ---
2025-11-01 14:00:48.140858 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 14:00:48.140865 | orchestrator | rtt min/avg/max/mdev = 2.298/5.404/11.206/4.105 ms
2025-11-01 14:00:48.140872 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 14:00:48.140879 | orchestrator | + ping -c3 192.168.112.112
2025-11-01 14:00:48.153018 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-11-01 14:00:48.153035 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=8.03 ms
2025-11-01 14:00:49.149630 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.85 ms
2025-11-01 14:00:50.150562 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.74 ms
2025-11-01 14:00:50.150655 | orchestrator |
2025-11-01 14:00:50.150671 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-11-01 14:00:50.150683 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 14:00:50.150695 | orchestrator | rtt min/avg/max/mdev = 1.738/4.203/8.026/2.740 ms
2025-11-01 14:00:50.150706 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 14:00:50.150718 | orchestrator | + ping -c3 192.168.112.195
2025-11-01 14:00:50.162769 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2025-11-01 14:00:50.162800 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=8.26 ms
2025-11-01 14:00:51.158852 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=2.72 ms
2025-11-01 14:00:52.160663 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.21 ms
2025-11-01 14:00:52.160748 | orchestrator |
2025-11-01 14:00:52.160763 | orchestrator | --- 192.168.112.195 ping statistics ---
2025-11-01 14:00:52.160774 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-11-01 14:00:52.160786 | orchestrator | rtt min/avg/max/mdev = 2.212/4.396/8.259/2.739 ms
2025-11-01 14:00:52.162138 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 14:00:52.162166 | orchestrator | + ping -c3 192.168.112.178
2025-11-01 14:00:52.173923 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
2025-11-01 14:00:52.173950 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=7.60 ms
2025-11-01 14:00:53.170682 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.95 ms
2025-11-01 14:00:54.171001 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.91 ms
2025-11-01 14:00:54.171327 | orchestrator |
2025-11-01 14:00:54.171356 | orchestrator | --- 192.168.112.178 ping statistics ---
2025-11-01 14:00:54.171368 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 14:00:54.171380 | orchestrator | rtt min/avg/max/mdev = 1.907/4.153/7.602/2.475 ms
2025-11-01 14:00:54.172158 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-11-01 14:00:54.172186 | orchestrator | + ping -c3 192.168.112.191
2025-11-01 14:00:54.184563 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2025-11-01 14:00:54.184590 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=6.54 ms
2025-11-01 14:00:55.183295 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=3.05 ms
2025-11-01 14:00:56.184184 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=2.15 ms
2025-11-01 14:00:56.184258 | orchestrator |
2025-11-01 14:00:56.184266 | orchestrator | --- 192.168.112.191 ping statistics ---
2025-11-01 14:00:56.184274 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-11-01 14:00:56.184281 | orchestrator | rtt min/avg/max/mdev = 2.154/3.915/6.541/1.892 ms
2025-11-01 14:00:56.385755 | orchestrator | ok: Runtime: 0:22:34.223545
2025-11-01 14:00:56.448306 |
2025-11-01 14:00:56.448461 | TASK [Run tempest]
2025-11-01 14:00:56.985694 | orchestrator | skipping: Conditional result was False
2025-11-01 14:00:57.002895 |
2025-11-01 14:00:57.003044 | TASK [Check prometheus alert status]
2025-11-01 14:00:57.536870 | orchestrator | skipping: Conditional result was False
2025-11-01 14:00:57.539967 |
2025-11-01 14:00:57.540171 | PLAY RECAP
2025-11-01 14:00:57.540325 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-11-01 14:00:57.540393 |
2025-11-01 14:00:57.759164 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-11-01 14:00:57.761624 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-01 14:00:58.500244 |
2025-11-01 14:00:58.500393 | PLAY [Post output play]
2025-11-01 14:00:58.517109 |
2025-11-01 14:00:58.517239 | LOOP [stage-output : Register sources]
2025-11-01 14:00:58.586621 |
2025-11-01 14:00:58.586908 | TASK [stage-output : Check sudo]
2025-11-01 14:00:59.420091 | orchestrator | sudo: a password is required
2025-11-01 14:00:59.625437 | orchestrator | ok: Runtime: 0:00:00.013119
2025-11-01 14:00:59.640258 |
2025-11-01 14:00:59.640414 | LOOP [stage-output : Set source and destination for files and folders]
2025-11-01 14:00:59.676526 |
2025-11-01 14:00:59.676769 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-11-01 14:00:59.745040 | orchestrator | ok
2025-11-01 14:00:59.754356 |
2025-11-01 14:00:59.754487 | LOOP [stage-output : Ensure target folders exist]
2025-11-01 14:01:00.172643 | orchestrator | ok: "docs"
2025-11-01 14:01:00.172977 |
2025-11-01 14:01:00.395761 | orchestrator | ok: "artifacts"
2025-11-01 14:01:00.636014 | orchestrator | ok: "logs"
2025-11-01 14:01:00.648131 |
2025-11-01 14:01:00.648259 | LOOP [stage-output : Copy files and folders to staging folder]
2025-11-01 14:01:00.681555 |
2025-11-01 14:01:00.681764 | TASK [stage-output : Make all log files readable]
2025-11-01 14:01:00.954335 | orchestrator | ok
2025-11-01 14:01:00.962712 |
2025-11-01 14:01:00.962860 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-11-01 14:01:01.007644 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:01.026151 |
2025-11-01 14:01:01.026291 | TASK [stage-output : Discover log files for compression]
2025-11-01 14:01:01.050741 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:01.062934 |
2025-11-01 14:01:01.063123 | LOOP [stage-output : Archive everything from logs]
2025-11-01 14:01:01.111766 |
2025-11-01 14:01:01.111962 | PLAY [Post cleanup play]
2025-11-01 14:01:01.121248 |
2025-11-01 14:01:01.121345 | TASK [Set cloud fact (Zuul deployment)]
2025-11-01 14:01:01.172964 | orchestrator | ok
2025-11-01 14:01:01.182377 |
2025-11-01 14:01:01.182483 | TASK [Set cloud fact (local deployment)]
2025-11-01 14:01:01.205531 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:01.214687 |
2025-11-01 14:01:01.214801 | TASK [Clean the cloud environment]
2025-11-01 14:01:02.715816 | orchestrator | 2025-11-01 14:01:02 - clean up servers
2025-11-01 14:01:03.563084 | orchestrator | 2025-11-01 14:01:03 - testbed-manager
2025-11-01 14:01:03.649946 | orchestrator | 2025-11-01 14:01:03 - testbed-node-3
2025-11-01 14:01:03.735441 | orchestrator | 2025-11-01 14:01:03 - testbed-node-2
2025-11-01 14:01:03.818432 | orchestrator | 2025-11-01 14:01:03 - testbed-node-4
2025-11-01 14:01:03.907827 | orchestrator | 2025-11-01 14:01:03 - testbed-node-1
2025-11-01 14:01:04.004147 | orchestrator | 2025-11-01 14:01:04 - testbed-node-5
2025-11-01 14:01:04.100463 | orchestrator | 2025-11-01 14:01:04 - testbed-node-0
2025-11-01 14:01:04.201048 | orchestrator | 2025-11-01 14:01:04 - clean up keypairs
2025-11-01 14:01:04.220901 | orchestrator | 2025-11-01 14:01:04 - testbed
2025-11-01 14:01:04.246958 | orchestrator | 2025-11-01 14:01:04 - wait for servers to be gone
2025-11-01 14:01:12.944733 | orchestrator | 2025-11-01 14:01:12 - clean up ports
2025-11-01 14:01:13.128790 | orchestrator | 2025-11-01 14:01:13 - 070ad7d5-c61c-41fd-bdea-b245bbfa889a
2025-11-01 14:01:13.610995 | orchestrator | 2025-11-01 14:01:13 - 23198e66-ce83-412a-9bc8-fef2c04e5706
2025-11-01 14:01:13.846977 | orchestrator | 2025-11-01 14:01:13 - 5c1abd3f-70e3-4d4e-9903-8acb9bff4da6
2025-11-01 14:01:14.071425 | orchestrator | 2025-11-01 14:01:14 - 76aac4d6-e88d-4978-8e89-71d854760091
2025-11-01 14:01:14.305273 | orchestrator | 2025-11-01 14:01:14 - aeacec4c-3719-4127-a635-d5b42a0bf572
2025-11-01 14:01:14.546240 | orchestrator | 2025-11-01 14:01:14 - b50e9c39-8f81-4d27-8ebc-27dea5a807b0
2025-11-01 14:01:14.767338 | orchestrator | 2025-11-01 14:01:14 - b5d37c41-92ad-4eb0-a647-20e5795350e5
2025-11-01 14:01:14.992136 | orchestrator | 2025-11-01 14:01:14 - clean up volumes
2025-11-01 14:01:15.100928 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-3-node-base
2025-11-01 14:01:15.139517 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-2-node-base
2025-11-01 14:01:15.186338 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-1-node-base
2025-11-01 14:01:15.229379 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-4-node-base
2025-11-01 14:01:15.270537 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-5-node-base
2025-11-01 14:01:15.310229 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-0-node-base
2025-11-01 14:01:15.349850 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-4-node-4
2025-11-01 14:01:15.394860 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-3-node-3
2025-11-01 14:01:15.432885 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-8-node-5
2025-11-01 14:01:15.477020 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-6-node-3
2025-11-01 14:01:15.516702 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-manager-base
2025-11-01 14:01:15.555754 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-5-node-5
2025-11-01 14:01:15.595602 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-2-node-5
2025-11-01 14:01:15.640040 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-0-node-3
2025-11-01 14:01:15.684117 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-1-node-4
2025-11-01 14:01:15.731042 | orchestrator | 2025-11-01 14:01:15 - testbed-volume-7-node-4
2025-11-01 14:01:15.773222 | orchestrator | 2025-11-01 14:01:15 - disconnect routers
2025-11-01 14:01:15.947143 | orchestrator | 2025-11-01 14:01:15 - testbed
2025-11-01 14:01:16.992708 | orchestrator | 2025-11-01 14:01:16 - clean up subnets
2025-11-01 14:01:17.064168 | orchestrator | 2025-11-01 14:01:17 - subnet-testbed-management
2025-11-01 14:01:17.299207 | orchestrator | 2025-11-01 14:01:17 - clean up networks
2025-11-01 14:01:17.476376 | orchestrator | 2025-11-01 14:01:17 - net-testbed-management
2025-11-01 14:01:18.253860 | orchestrator | 2025-11-01 14:01:18 - clean up security groups
2025-11-01 14:01:18.293538 | orchestrator | 2025-11-01 14:01:18 - testbed-node
2025-11-01 14:01:18.410186 | orchestrator | 2025-11-01 14:01:18 - testbed-management
2025-11-01 14:01:18.523737 | orchestrator | 2025-11-01 14:01:18 - clean up floating ips
2025-11-01 14:01:18.558877 | orchestrator | 2025-11-01 14:01:18 - 81.163.192.228
2025-11-01 14:01:18.908775 | orchestrator | 2025-11-01 14:01:18 - clean up routers
2025-11-01 14:01:19.031358 | orchestrator | 2025-11-01 14:01:19 - testbed
2025-11-01 14:01:20.263891 | orchestrator | ok: Runtime: 0:00:18.403060
2025-11-01 14:01:20.268160 |
2025-11-01 14:01:20.268309 | PLAY RECAP
2025-11-01 14:01:20.268416 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-11-01 14:01:20.268468 |
2025-11-01 14:01:20.399612 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-11-01 14:01:20.401669 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-01 14:01:21.124297 |
2025-11-01 14:01:21.124456 | PLAY [Cleanup play]
2025-11-01 14:01:21.140249 |
2025-11-01 14:01:21.140373 | TASK [Set cloud fact (Zuul deployment)]
2025-11-01 14:01:21.193357 | orchestrator | ok
2025-11-01 14:01:21.200911 |
2025-11-01 14:01:21.201039 | TASK [Set cloud fact (local deployment)]
2025-11-01 14:01:21.224625 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:21.234715 |
2025-11-01 14:01:21.234826 | TASK [Clean the cloud environment]
2025-11-01 14:01:22.326811 | orchestrator | 2025-11-01 14:01:22 - clean up servers
2025-11-01 14:01:22.821079 | orchestrator | 2025-11-01 14:01:22 - clean up keypairs
2025-11-01 14:01:22.834342 | orchestrator | 2025-11-01 14:01:22 - wait for servers to be gone
2025-11-01 14:01:22.874389 | orchestrator | 2025-11-01 14:01:22 - clean up ports
2025-11-01 14:01:22.944606 | orchestrator | 2025-11-01 14:01:22 - clean up volumes
2025-11-01 14:01:23.015586 | orchestrator | 2025-11-01 14:01:23 - disconnect routers
2025-11-01 14:01:23.043641 | orchestrator | 2025-11-01 14:01:23 - clean up subnets
2025-11-01 14:01:23.061361 | orchestrator | 2025-11-01 14:01:23 - clean up networks
2025-11-01 14:01:23.244888 | orchestrator | 2025-11-01 14:01:23 - clean up security groups
2025-11-01 14:01:23.280384 | orchestrator | 2025-11-01 14:01:23 - clean up floating ips
2025-11-01 14:01:23.302933 | orchestrator | 2025-11-01 14:01:23 - clean up routers
2025-11-01 14:01:23.770518 | orchestrator | ok: Runtime: 0:00:01.347535
2025-11-01 14:01:23.774390 |
2025-11-01 14:01:23.774533 | PLAY RECAP
2025-11-01 14:01:23.774635 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-11-01 14:01:23.774687 |
2025-11-01 14:01:23.893471 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-11-01 14:01:23.894444 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-01 14:01:24.614989 |
2025-11-01 14:01:24.615153 | PLAY [Base post-fetch]
2025-11-01 14:01:24.630374 |
2025-11-01 14:01:24.630502 | TASK [fetch-output : Set log path for multiple nodes]
2025-11-01 14:01:24.686118 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:24.699698 |
2025-11-01 14:01:24.699885 | TASK [fetch-output : Set log path for single node]
2025-11-01 14:01:24.749273 | orchestrator | ok
2025-11-01 14:01:24.758505 |
2025-11-01 14:01:24.758642 | LOOP [fetch-output : Ensure local output dirs]
2025-11-01 14:01:25.233563 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/logs"
2025-11-01 14:01:25.496490 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/artifacts"
2025-11-01 14:01:25.765739 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0479b91f6fbb44c9a2b8080c3e05ad70/work/docs"
2025-11-01 14:01:25.784952 |
2025-11-01 14:01:25.785070 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-11-01 14:01:26.722362 | orchestrator | changed: .d..t...... ./
2025-11-01 14:01:26.722690 | orchestrator | changed: All items complete
2025-11-01 14:01:26.722747 |
2025-11-01 14:01:27.464666 | orchestrator | changed: .d..t...... ./
2025-11-01 14:01:28.210248 | orchestrator | changed: .d..t...... ./
2025-11-01 14:01:28.240668 |
2025-11-01 14:01:28.240890 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-11-01 14:01:28.268040 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:28.270940 | orchestrator | skipping: Conditional result was False
2025-11-01 14:01:28.292658 |
2025-11-01 14:01:28.292757 | PLAY RECAP
2025-11-01 14:01:28.292830 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-11-01 14:01:28.292868 |
2025-11-01 14:01:28.409896 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-11-01 14:01:28.412271 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-01 14:01:29.152886 |
2025-11-01 14:01:29.153053 | PLAY [Base post]
2025-11-01 14:01:29.167529 |
2025-11-01 14:01:29.167682 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-11-01 14:01:30.119929 | orchestrator | changed
2025-11-01 14:01:30.130151 |
2025-11-01 14:01:30.130281 | PLAY RECAP
2025-11-01 14:01:30.130359 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-11-01 14:01:30.130433 |
2025-11-01 14:01:30.247231 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-11-01 14:01:30.248222 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-11-01 14:01:31.018474 |
2025-11-01 14:01:31.018645 | PLAY [Base post-logs]
2025-11-01 14:01:31.029227 |
2025-11-01 14:01:31.029359 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-11-01 14:01:31.470719 | localhost | changed
2025-11-01 14:01:31.480683 |
2025-11-01 14:01:31.480822 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-11-01 14:01:31.516856 | localhost | ok
2025-11-01 14:01:31.520962 |
2025-11-01 14:01:31.521083 | TASK [Set zuul-log-path fact]
2025-11-01 14:01:31.538526 | localhost | ok
2025-11-01 14:01:31.548672 |
2025-11-01 14:01:31.548783 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-11-01 14:01:31.585268 | localhost | ok
2025-11-01 14:01:31.591035 |
2025-11-01 14:01:31.591205 | TASK [upload-logs : Create log directories]
2025-11-01 14:01:32.099218 | localhost | changed
2025-11-01 14:01:32.104179 |
2025-11-01 14:01:32.104341 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-11-01 14:01:32.596641 | localhost -> localhost | ok: Runtime: 0:00:00.005268
2025-11-01 14:01:32.600770 |
2025-11-01 14:01:32.600880 | TASK [upload-logs : Upload logs to log server]
2025-11-01 14:01:33.155617 | localhost | Output suppressed because no_log was given
2025-11-01 14:01:33.159391 |
2025-11-01 14:01:33.159565 | LOOP [upload-logs : Compress console log and json output]
2025-11-01 14:01:33.218055 | localhost | skipping: Conditional result was False
2025-11-01 14:01:33.223062 | localhost | skipping: Conditional result was False
2025-11-01 14:01:33.230734 |
2025-11-01 14:01:33.230960 | LOOP [upload-logs : Upload compressed console log and json output]
2025-11-01 14:01:33.286252 | localhost | skipping: Conditional result was False
2025-11-01 14:01:33.287698 |
2025-11-01 14:01:33.290333 | localhost | skipping: Conditional result was False
2025-11-01 14:01:33.304592 |
2025-11-01 14:01:33.304792 | LOOP [upload-logs : Upload console log and json output]